Dec 02 20:11:56 crc systemd[1]: Starting Kubernetes Kubelet... Dec 02 20:11:56 crc restorecon[4689]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Dec 02 20:11:56 
crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c574,c582 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 02 
20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 20:11:56 crc 
restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 
02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 20:11:56 
crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 20:11:56 
crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 
20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 20:11:56 crc 
restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 
20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 
20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc 
restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 20:11:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 20:11:56 crc restorecon[4689]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 20:11:56 crc restorecon[4689]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 02 20:11:57 crc kubenswrapper[4796]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 02 20:11:57 crc kubenswrapper[4796]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 02 20:11:57 crc kubenswrapper[4796]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 02 20:11:57 crc kubenswrapper[4796]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 02 20:11:57 crc kubenswrapper[4796]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 02 20:11:57 crc kubenswrapper[4796]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.116219 4796 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124090 4796 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124135 4796 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124144 4796 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124155 4796 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124163 4796 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124172 4796 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124181 4796 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124189 4796 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124197 4796 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124205 4796 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124213 4796 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124221 4796 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124229 4796 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124241 4796 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124287 4796 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124297 4796 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124306 4796 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124315 4796 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124323 4796 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124334 4796 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. 
It will be removed in a future release. Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124348 4796 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124360 4796 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124371 4796 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124381 4796 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124391 4796 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124401 4796 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124447 4796 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124459 4796 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124470 4796 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124479 4796 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124488 4796 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124497 4796 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124509 4796 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124518 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124527 4796 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124535 4796 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124543 4796 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124552 4796 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124561 4796 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124569 4796 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124577 4796 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124587 4796 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124595 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124602 4796 feature_gate.go:330] unrecognized feature gate: Example Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124610 4796 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124618 4796 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124626 4796 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124635 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124644 4796 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124652 4796 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124660 4796 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124668 4796 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124675 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124684 4796 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124691 4796 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124699 4796 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124707 4796 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124717 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124725 4796 feature_gate.go:330] unrecognized feature gate: 
GatewayAPI Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124733 4796 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124741 4796 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124749 4796 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124756 4796 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124764 4796 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124772 4796 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124782 4796 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124792 4796 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124802 4796 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124810 4796 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124821 4796 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.124829 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.124996 4796 flags.go:64] FLAG: --address="0.0.0.0" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125014 4796 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125030 4796 flags.go:64] FLAG: --anonymous-auth="true" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125041 4796 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125053 4796 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125062 4796 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125074 4796 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125086 4796 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125096 4796 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125105 4796 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125115 4796 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125126 4796 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125135 4796 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125145 4796 flags.go:64] FLAG: --cgroup-root="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125154 4796 flags.go:64] FLAG: 
--cgroups-per-qos="true" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125163 4796 flags.go:64] FLAG: --client-ca-file="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125172 4796 flags.go:64] FLAG: --cloud-config="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125180 4796 flags.go:64] FLAG: --cloud-provider="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125191 4796 flags.go:64] FLAG: --cluster-dns="[]" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125201 4796 flags.go:64] FLAG: --cluster-domain="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125210 4796 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125220 4796 flags.go:64] FLAG: --config-dir="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125229 4796 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125239 4796 flags.go:64] FLAG: --container-log-max-files="5" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125250 4796 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125289 4796 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125299 4796 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125309 4796 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125318 4796 flags.go:64] FLAG: --contention-profiling="false" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125328 4796 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125339 4796 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125351 4796 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125362 4796 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125376 4796 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125389 4796 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125400 4796 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125409 4796 flags.go:64] FLAG: --enable-load-reader="false" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125420 4796 flags.go:64] FLAG: --enable-server="true" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125429 4796 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125443 4796 flags.go:64] FLAG: --event-burst="100" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125453 4796 flags.go:64] FLAG: --event-qps="50" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125462 4796 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125471 4796 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125481 4796 flags.go:64] FLAG: --eviction-hard="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125503 4796 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" 
Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125513 4796 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125522 4796 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125533 4796 flags.go:64] FLAG: --eviction-soft="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125542 4796 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125551 4796 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125562 4796 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125571 4796 flags.go:64] FLAG: --experimental-mounter-path="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125579 4796 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125588 4796 flags.go:64] FLAG: --fail-swap-on="true" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125597 4796 flags.go:64] FLAG: --feature-gates="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125609 4796 flags.go:64] FLAG: --file-check-frequency="20s" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125619 4796 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125628 4796 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125637 4796 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125646 4796 flags.go:64] FLAG: --healthz-port="10248" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125657 4796 flags.go:64] FLAG: --help="false" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125666 4796 flags.go:64] FLAG: --hostname-override="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125675 4796 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125685 4796 flags.go:64] FLAG: --http-check-frequency="20s" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125694 4796 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125703 4796 flags.go:64] FLAG: --image-credential-provider-config="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125712 4796 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125721 4796 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125730 4796 flags.go:64] FLAG: --image-service-endpoint="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125739 4796 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125749 4796 flags.go:64] FLAG: --kube-api-burst="100" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125758 4796 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125768 4796 flags.go:64] FLAG: --kube-api-qps="50" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125777 4796 flags.go:64] FLAG: --kube-reserved="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125786 4796 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 02 20:11:57 crc 
kubenswrapper[4796]: I1202 20:11:57.125794 4796 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125804 4796 flags.go:64] FLAG: --kubelet-cgroups="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125813 4796 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125822 4796 flags.go:64] FLAG: --lock-file="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125831 4796 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125840 4796 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125849 4796 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125863 4796 flags.go:64] FLAG: --log-json-split-stream="false" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125873 4796 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125883 4796 flags.go:64] FLAG: --log-text-split-stream="false" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125893 4796 flags.go:64] FLAG: --logging-format="text" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125902 4796 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125913 4796 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125922 4796 flags.go:64] FLAG: --manifest-url="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125931 4796 flags.go:64] FLAG: --manifest-url-header="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125942 4796 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125952 4796 flags.go:64] FLAG: --max-open-files="1000000" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125963 4796 flags.go:64] FLAG: --max-pods="110" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125972 4796 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125981 4796 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125990 4796 flags.go:64] FLAG: --memory-manager-policy="None" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.125999 4796 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126009 4796 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126018 4796 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126027 4796 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126047 4796 flags.go:64] FLAG: --node-status-max-images="50" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126056 4796 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126066 4796 flags.go:64] FLAG: --oom-score-adj="-999" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126075 4796 flags.go:64] FLAG: --pod-cidr="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126084 4796 
flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126096 4796 flags.go:64] FLAG: --pod-manifest-path="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126106 4796 flags.go:64] FLAG: --pod-max-pids="-1" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126115 4796 flags.go:64] FLAG: --pods-per-core="0" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126124 4796 flags.go:64] FLAG: --port="10250" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126139 4796 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126148 4796 flags.go:64] FLAG: --provider-id="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126157 4796 flags.go:64] FLAG: --qos-reserved="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126167 4796 flags.go:64] FLAG: --read-only-port="10255" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126176 4796 flags.go:64] FLAG: --register-node="true" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126185 4796 flags.go:64] FLAG: --register-schedulable="true" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126198 4796 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126213 4796 flags.go:64] FLAG: --registry-burst="10" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126223 4796 flags.go:64] FLAG: --registry-qps="5" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126232 4796 flags.go:64] FLAG: --reserved-cpus="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126243 4796 flags.go:64] FLAG: --reserved-memory="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126283 4796 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126293 4796 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126304 4796 flags.go:64] FLAG: --rotate-certificates="false" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126313 4796 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126323 4796 flags.go:64] FLAG: --runonce="false" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126332 4796 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126344 4796 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126357 4796 flags.go:64] FLAG: --seccomp-default="false" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126368 4796 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126379 4796 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126391 4796 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126401 4796 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126411 4796 flags.go:64] FLAG: --storage-driver-password="root" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126420 4796 flags.go:64] FLAG: --storage-driver-secure="false" Dec 02 20:11:57 crc 
kubenswrapper[4796]: I1202 20:11:57.126429 4796 flags.go:64] FLAG: --storage-driver-table="stats" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126438 4796 flags.go:64] FLAG: --storage-driver-user="root" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126447 4796 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126457 4796 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126466 4796 flags.go:64] FLAG: --system-cgroups="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126475 4796 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126490 4796 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126503 4796 flags.go:64] FLAG: --tls-cert-file="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126511 4796 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126523 4796 flags.go:64] FLAG: --tls-min-version="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126534 4796 flags.go:64] FLAG: --tls-private-key-file="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126545 4796 flags.go:64] FLAG: --topology-manager-policy="none" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126557 4796 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126572 4796 flags.go:64] FLAG: --topology-manager-scope="container" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126583 4796 flags.go:64] FLAG: --v="2" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126598 4796 flags.go:64] FLAG: --version="false" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126612 4796 flags.go:64] FLAG: --vmodule="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126626 4796 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.126638 4796 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.126889 4796 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.126903 4796 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.126917 4796 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.126928 4796 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.126937 4796 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.126946 4796 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.126958 4796 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.126967 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.126977 4796 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.126986 4796 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.126995 4796 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127004 4796 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127014 4796 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127024 4796 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127033 4796 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127042 4796 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127052 4796 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127060 4796 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127070 4796 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127079 4796 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127093 4796 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127102 4796 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127111 4796 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127120 4796 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127129 4796 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127138 4796 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127152 4796 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127162 4796 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127172 4796 feature_gate.go:330] unrecognized feature gate: 
ClusterAPIInstallIBMCloud Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127182 4796 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127192 4796 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127202 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127212 4796 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127223 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127232 4796 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127242 4796 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127286 4796 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127297 4796 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127311 4796 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127321 4796 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127331 4796 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127341 4796 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127351 4796 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127362 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127372 4796 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127383 4796 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127393 4796 feature_gate.go:330] unrecognized feature gate: Example Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127408 4796 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127419 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127430 4796 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127441 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127452 4796 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127467 4796 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127479 4796 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127490 4796 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127503 4796 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127515 4796 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127528 4796 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127543 4796 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127554 4796 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127565 4796 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127575 4796 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127585 4796 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127595 4796 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127605 4796 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127615 4796 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127624 4796 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127633 4796 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127644 4796 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127653 4796 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.127662 4796 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.127691 4796 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false 
ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.135917 4796 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.135956 4796 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136045 4796 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136054 4796 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136061 4796 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136067 4796 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136072 4796 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136077 4796 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136083 4796 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136088 4796 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136093 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136098 4796 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136104 4796 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136109 4796 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136113 4796 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136120 4796 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136128 4796 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136133 4796 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136138 4796 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136143 4796 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136149 4796 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136155 4796 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136161 4796 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136167 4796 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136174 4796 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136182 4796 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136188 4796 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136193 4796 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136198 4796 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136204 4796 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136209 4796 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136215 4796 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136220 4796 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136225 4796 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136229 4796 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136235 4796 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136241 4796 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136246 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136267 4796 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136272 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136277 4796 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 02 
20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136282 4796 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136287 4796 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136293 4796 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136297 4796 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136304 4796 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136310 4796 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136316 4796 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136321 4796 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136327 4796 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136332 4796 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136337 4796 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136342 4796 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136347 4796 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136352 4796 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136357 4796 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136362 4796 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136367 4796 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136372 4796 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136377 4796 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136382 4796 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136387 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136392 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136431 4796 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136437 4796 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136441 4796 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136446 4796 
feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136452 4796 feature_gate.go:330] unrecognized feature gate: Example Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136458 4796 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136464 4796 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136470 4796 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136476 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136482 4796 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.136491 4796 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136676 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136685 4796 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136692 4796 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136699 4796 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136704 4796 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136710 4796 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136715 4796 feature_gate.go:330] unrecognized feature gate: Example Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136720 4796 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136725 4796 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136730 4796 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136735 4796 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136740 4796 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136745 4796 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136752 4796 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136758 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136763 4796 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136769 4796 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136774 4796 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136780 4796 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136785 4796 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136791 4796 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136796 4796 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136801 4796 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136807 4796 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136811 4796 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136817 4796 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136821 4796 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136826 4796 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136831 4796 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136836 4796 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136841 4796 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136846 4796 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136851 4796 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136856 4796 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136862 4796 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136867 4796 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136872 4796 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136879 4796 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136884 4796 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136889 4796 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136894 4796 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136899 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136904 4796 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136909 4796 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136915 4796 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136920 4796 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136925 4796 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136930 4796 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136935 4796 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136942 4796 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136949 4796 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136955 4796 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136961 4796 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136967 4796 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136973 4796 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136978 4796 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136983 4796 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136988 4796 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136993 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.136998 4796 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.137004 4796 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.137009 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.137014 4796 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.137019 4796 
feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.137024 4796 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.137030 4796 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.137036 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.137041 4796 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.137045 4796 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.137050 4796 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.137061 4796 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.137069 4796 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.137317 4796 server.go:940] "Client rotation is on, will bootstrap in background" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.140326 4796 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.140466 4796 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.141328 4796 server.go:997] "Starting client certificate rotation" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.141355 4796 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.141821 4796 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-04 20:54:49.009642698 +0000 UTC Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.141902 4796 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 48h42m51.867743033s for next certificate rotation Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.146306 4796 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.148143 4796 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.155339 4796 log.go:25] "Validated CRI v1 runtime API" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.175958 4796 log.go:25] "Validated CRI v1 image API" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.177812 4796 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.180127 4796 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-02-20-08-28-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.180173 4796 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}] Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.200563 4796 manager.go:217] Machine: {Timestamp:2025-12-02 20:11:57.198968927 +0000 UTC m=+0.202344491 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:9fca617c-b4ed-442d-9d01-94fab08be868 BootID:45bad139-cd57-490c-a638-731a42709a6c Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 
DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:ee:a3:be Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:ee:a3:be Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:04:f5:5e Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:78:39:56 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:a0:03:1e Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:19:23:29 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:02:46:95:2b:67:55 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:4e:42:80:3d:f3:e7 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] 
SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.201096 4796 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.201310 4796 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.201624 4796 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.201843 4796 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.201885 4796 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.202108 4796 topology_manager.go:138] "Creating topology manager with none policy" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.202124 4796 container_manager_linux.go:303] "Creating device plugin manager" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.202368 4796 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.202416 4796 server.go:66] "Creating device plugin registration server" version="v1beta1" 
socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.202651 4796 state_mem.go:36] "Initialized new in-memory state store" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.203010 4796 server.go:1245] "Using root directory" path="/var/lib/kubelet" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.203684 4796 kubelet.go:418] "Attempting to sync node with API server" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.203711 4796 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.203737 4796 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.203754 4796 kubelet.go:324] "Adding apiserver pod source" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.203768 4796 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.205488 4796 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.205837 4796 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.206689 4796 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.207219 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.207250 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.207277 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.207286 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.207300 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.207242 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.207310 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.207397 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.207412 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.207423 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.207434 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.207448 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.207457 4796 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/local-volume" Dec 02 20:11:57 crc kubenswrapper[4796]: E1202 20:11:57.207376 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.241:6443: connect: connection refused" logger="UnhandledError" Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.207476 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Dec 02 20:11:57 crc kubenswrapper[4796]: E1202 20:11:57.207656 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.241:6443: connect: connection refused" logger="UnhandledError" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.207902 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.208429 4796 server.go:1280] "Started kubelet" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.208982 4796 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.209494 4796 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.208819 4796 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.210281 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.210346 4796 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.210438 4796 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 02 20:11:57 crc systemd[1]: Started Kubernetes Kubelet. 
Dec 02 20:11:57 crc kubenswrapper[4796]: E1202 20:11:57.210595 4796 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.210739 4796 volume_manager.go:287] "The desired_state_of_world populator starts" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.210756 4796 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.211768 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 10:47:41.36866685 +0000 UTC Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.212476 4796 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.213459 4796 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.213481 4796 factory.go:55] Registering systemd factory Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.213495 4796 factory.go:221] Registration of the systemd container factory successfully Dec 02 20:11:57 crc kubenswrapper[4796]: E1202 20:11:57.214120 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" interval="200ms" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.214946 4796 factory.go:153] Registering CRI-O factory Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.214967 4796 factory.go:221] Registration of the crio container factory successfully Dec 02 20:11:57 crc kubenswrapper[4796]: E1202 20:11:57.213608 4796 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.241:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187d7f0bb2180fb5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-02 20:11:57.208391605 +0000 UTC m=+0.211767159,LastTimestamp:2025-12-02 20:11:57.208391605 +0000 UTC m=+0.211767159,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.214994 4796 factory.go:103] Registering Raw factory Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.215103 4796 manager.go:1196] Started watching for new ooms in manager Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.215411 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Dec 02 20:11:57 crc kubenswrapper[4796]: E1202 20:11:57.215484 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.241:6443: connect: connection refused" logger="UnhandledError" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.219834 4796 server.go:460] "Adding debug handlers to kubelet server" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.220207 4796 manager.go:319] Starting recovery of all containers Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.228636 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.228716 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.228737 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.228754 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.228769 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.228784 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.228801 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.228817 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.228835 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.228852 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.228869 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.228887 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.228905 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.228923 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.228938 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.228956 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.228976 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.228993 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229010 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229029 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229047 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229065 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229084 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229100 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229119 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229137 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229215 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229240 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229283 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229304 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229321 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229340 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229359 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229380 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229398 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229416 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229435 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229452 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229469 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229487 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229506 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229523 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229541 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229559 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229577 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229597 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229613 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229630 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229648 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229666 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229684 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229702 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229729 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229748 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" 
volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229767 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229786 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229805 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229824 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229841 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229857 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229874 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229893 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229911 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229929 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229949 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229968 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.229985 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230001 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230020 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230037 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230053 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230070 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230087 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230105 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230121 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230140 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230157 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230176 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230194 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230211 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230229 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230248 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230289 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230307 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230324 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230344 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230362 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230378 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230397 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230414 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230434 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230450 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230466 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230481 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230496 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230514 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230533 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230549 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230565 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230581 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230597 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230613 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230629 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230645 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230671 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230689 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230707 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230728 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230746 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230764 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230783 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230800 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230820 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230837 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230855 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230871 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230886 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230902 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230918 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230936 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230953 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230969 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.230985 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.231001 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.231019 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.231034 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.231048 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.231727 4796 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.231764 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.231784 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.231801 4796 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.231817 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.231835 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.231851 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.231866 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.231883 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.231900 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.231918 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.231934 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.231950 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.231963 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.231978 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.231994 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.234539 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.234609 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.234664 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.234697 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.234728 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.234774 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.234802 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.234841 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.234882 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.234924 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" 
volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.234965 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.234992 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.235027 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.235054 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.235080 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.235119 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.235144 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.235179 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.235204 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.235230 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.235297 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.235323 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.235356 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.235379 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.235411 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.235446 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.235469 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.235497 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.235531 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.235557 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.235586 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.235610 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.235637 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.235674 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.235702 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.235732 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.235757 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.235782 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.235826 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.235859 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.235917 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.235948 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.235971 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.236001 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.236086 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.236151 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.236805 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.236857 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.236890 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.236914 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.236943 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.236967 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.237010 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.237040 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.237060 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.237088 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.237108 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.237134 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.237159 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.237185 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.237209 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.237230 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.237276 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.237308 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.237334 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.237357 4796 reconstruct.go:97] "Volume reconstruction finished" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.237370 4796 reconciler.go:26] "Reconciler: start to sync state" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.237203 4796 manager.go:324] Recovery completed Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.251929 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.255656 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.255714 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.255805 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.256759 4796 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.256783 4796 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.256808 4796 state_mem.go:36] "Initialized new in-memory state store" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.261694 4796 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.263639 4796 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.263685 4796 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.263725 4796 kubelet.go:2335] "Starting kubelet main sync loop" Dec 02 20:11:57 crc kubenswrapper[4796]: E1202 20:11:57.263775 4796 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.266014 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Dec 02 20:11:57 crc kubenswrapper[4796]: E1202 20:11:57.266088 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.241:6443: connect: connection refused" logger="UnhandledError" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.266795 4796 policy_none.go:49] "None policy: Start" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.267636 4796 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.267674 4796 state_mem.go:35] "Initializing new in-memory state store" Dec 02 20:11:57 crc kubenswrapper[4796]: E1202 20:11:57.311449 4796 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 02 
20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.317496 4796 manager.go:334] "Starting Device Plugin manager" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.317566 4796 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.317581 4796 server.go:79] "Starting device plugin registration server" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.318447 4796 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.318467 4796 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.318925 4796 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.319224 4796 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.319284 4796 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 02 20:11:57 crc kubenswrapper[4796]: E1202 20:11:57.327540 4796 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.364015 4796 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.364170 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.364925 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.364952 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.364963 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.365058 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.365333 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.365441 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.365759 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.365806 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.365822 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.366042 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.366106 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.366130 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.366205 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.366226 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.366234 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.367010 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.367046 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.367060 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.367192 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.367225 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.367245 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.367395 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.367526 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.367555 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.368323 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.368337 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.368348 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.368364 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.368370 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.368381 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.368515 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.368712 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.368751 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.369123 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.369147 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.369159 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.369343 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.369385 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.369732 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.369756 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.369767 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.369956 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.369979 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.369990 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:11:57 crc kubenswrapper[4796]: E1202 20:11:57.415299 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" interval="400ms" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.418890 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.421681 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.421732 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.421745 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.421773 4796 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 02 20:11:57 crc kubenswrapper[4796]: E1202 20:11:57.422419 4796 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.241:6443: connect: connection refused" node="crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.439480 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.439517 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 02 20:11:57 crc 
kubenswrapper[4796]: I1202 20:11:57.439538 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.439555 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.439578 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.439600 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.439630 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.439656 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.439776 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.439827 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.439901 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.439935 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.439961 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.440022 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.440089 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.541555 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.541612 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.541635 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.541652 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.541671 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.541687 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 20:11:57 crc 
kubenswrapper[4796]: I1202 20:11:57.541702 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.541717 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.541737 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.541751 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.541747 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.541807 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.541793 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.541767 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.541865 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.541879 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.541887 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.541924 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.541896 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.541990 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.541999 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.541998 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.542015 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.542032 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.542043 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.542075 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.542107 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.542128 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.542150 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.542239 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.623467 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.625244 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.625427 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.625557 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.625875 4796 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 02 20:11:57 crc kubenswrapper[4796]: E1202 20:11:57.626555 4796 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.241:6443: connect: connection refused" node="crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.697558 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.716003 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.723425 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.727844 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-0b70483b13aeae92f5741a276add8d8e709816b3a6dbda3d9b7d91d304840d9f WatchSource:0}: Error finding container 0b70483b13aeae92f5741a276add8d8e709816b3a6dbda3d9b7d91d304840d9f: Status 404 returned error can't find the container with id 0b70483b13aeae92f5741a276add8d8e709816b3a6dbda3d9b7d91d304840d9f Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.732894 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-d278835a49faaaf87bb0a37f8ff1ea066695046f882a015b05df6d1c0c18ff77 WatchSource:0}: Error finding container d278835a49faaaf87bb0a37f8ff1ea066695046f882a015b05df6d1c0c18ff77: Status 404 returned error can't find the container with id d278835a49faaaf87bb0a37f8ff1ea066695046f882a015b05df6d1c0c18ff77 Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.755897 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: I1202 20:11:57.764994 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.770818 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-4acc71d40e266945e98d12668c65cf9091c5a8602520a34b41fbcd44c0a1cc3e WatchSource:0}: Error finding container 4acc71d40e266945e98d12668c65cf9091c5a8602520a34b41fbcd44c0a1cc3e: Status 404 returned error can't find the container with id 4acc71d40e266945e98d12668c65cf9091c5a8602520a34b41fbcd44c0a1cc3e Dec 02 20:11:57 crc kubenswrapper[4796]: W1202 20:11:57.779497 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-fe9bd72fcd36c72e6d2e0589a663cbdd0cec6e210821e17e487929b99fe133ba WatchSource:0}: Error finding container fe9bd72fcd36c72e6d2e0589a663cbdd0cec6e210821e17e487929b99fe133ba: Status 404 returned error can't find the container with id fe9bd72fcd36c72e6d2e0589a663cbdd0cec6e210821e17e487929b99fe133ba Dec 02 20:11:57 crc kubenswrapper[4796]: E1202 20:11:57.816290 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" interval="800ms" Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.027273 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:11:58 crc kubenswrapper[4796]: W1202 20:11:58.027729 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Dec 02 20:11:58 crc kubenswrapper[4796]: E1202 20:11:58.027800 4796 reflector.go:158] "Unhandled 
Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.241:6443: connect: connection refused" logger="UnhandledError" Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.028973 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.029005 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.029016 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.029041 4796 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 02 20:11:58 crc kubenswrapper[4796]: E1202 20:11:58.029621 4796 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.241:6443: connect: connection refused" node="crc" Dec 02 20:11:58 crc kubenswrapper[4796]: W1202 20:11:58.103358 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Dec 02 20:11:58 crc kubenswrapper[4796]: E1202 20:11:58.103439 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.241:6443: connect: connection refused" logger="UnhandledError" Dec 02 20:11:58 crc kubenswrapper[4796]: W1202 20:11:58.194451 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Dec 02 20:11:58 crc kubenswrapper[4796]: E1202 20:11:58.194531 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.241:6443: connect: connection refused" logger="UnhandledError" Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.210980 4796 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.212154 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 23:17:28.443662548 +0000 UTC Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.212206 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 915h5m30.231458494s for next certificate rotation Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.270966 4796 generic.go:334] "Generic (PLEG): container finished" 
podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8" exitCode=0 Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.271110 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8"} Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.271326 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f55710e841594fe6fd01e7809e13eb2b6cd74e56fcf4dbe7e821cf0d13c901e5"} Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.271507 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.273230 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.273299 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.273311 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.273457 4796 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e" exitCode=0 Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.273535 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e"} Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.273574 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d278835a49faaaf87bb0a37f8ff1ea066695046f882a015b05df6d1c0c18ff77"} Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.273703 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.278615 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.278638 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.278648 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.279666 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.280283 4796 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="2996617c5bfa9169fbd57b20c802321da67f8190e58dc4884fce27057448ce84" exitCode=0 Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.280342 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"2996617c5bfa9169fbd57b20c802321da67f8190e58dc4884fce27057448ce84"} Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.280364 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"0b70483b13aeae92f5741a276add8d8e709816b3a6dbda3d9b7d91d304840d9f"} Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.280431 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.280810 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.280843 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.280861 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.281214 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.281233 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.281242 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.282937 4796 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="e36d3a2ede3aa51b49c1512f62e61632e2ef39f384c8d25fec122ece877c7dad" exitCode=0 Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.283000 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"e36d3a2ede3aa51b49c1512f62e61632e2ef39f384c8d25fec122ece877c7dad"} Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.283034 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"fe9bd72fcd36c72e6d2e0589a663cbdd0cec6e210821e17e487929b99fe133ba"} Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.283112 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.283941 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.283967 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.283978 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.285671 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3931263dfbba517a3f109c2fec313fdb38b7232f4384fbf922d2de2f40d95840"} Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.285722 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4acc71d40e266945e98d12668c65cf9091c5a8602520a34b41fbcd44c0a1cc3e"} Dec 02 20:11:58 crc kubenswrapper[4796]: E1202 20:11:58.618067 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" interval="1.6s" Dec 02 20:11:58 crc kubenswrapper[4796]: W1202 20:11:58.770884 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Dec 02 20:11:58 crc kubenswrapper[4796]: E1202 20:11:58.771206 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.241:6443: connect: connection refused" logger="UnhandledError" Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.830073 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.831843 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.831865 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.831873 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:11:58 crc kubenswrapper[4796]: I1202 20:11:58.831891 4796 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 02 20:11:58 crc kubenswrapper[4796]: E1202 20:11:58.832285 4796 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.241:6443: connect: connection refused" node="crc" Dec 02 20:11:59 crc kubenswrapper[4796]: I1202 20:11:59.291601 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"6f232da7c9328c076303d93cdcc696870504e56361a76a1e8c3222978856d6e6"} Dec 02 20:11:59 crc kubenswrapper[4796]: I1202 20:11:59.291729 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:11:59 crc kubenswrapper[4796]: I1202 20:11:59.292894 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:11:59 crc kubenswrapper[4796]: I1202 20:11:59.292924 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:11:59 crc kubenswrapper[4796]: I1202 20:11:59.292933 4796 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:11:59 crc kubenswrapper[4796]: I1202 20:11:59.295358 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ae8793493573dfb14c86ac89098d6872b1833e33f246947bedd5e5308a3d656f"} Dec 02 20:11:59 crc kubenswrapper[4796]: I1202 20:11:59.295393 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d6d502f34530d387d0dca7b89c9b7cd1fea6e06b6ddf51ab1c58daefd8265e5a"} Dec 02 20:11:59 crc kubenswrapper[4796]: I1202 20:11:59.295404 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"cd6d808775c86045c4d0142fbb6f4e015251902b10a37c70bd2804c260c72b7a"} Dec 02 20:11:59 crc kubenswrapper[4796]: I1202 20:11:59.295480 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:11:59 crc kubenswrapper[4796]: I1202 20:11:59.296132 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:11:59 crc kubenswrapper[4796]: I1202 20:11:59.296156 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:11:59 crc kubenswrapper[4796]: I1202 20:11:59.296165 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:11:59 crc kubenswrapper[4796]: I1202 20:11:59.298659 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0bce1cb4a264ff1c25ca77f3e4c9aa251e20002d987f4e1aa61fce2c99799f42"} Dec 02 20:11:59 crc kubenswrapper[4796]: I1202 20:11:59.298684 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"567f06ef173b505801a2b909e2ffd85973c697cbd3d89c10a5b118247c58c193"} Dec 02 20:11:59 crc kubenswrapper[4796]: I1202 20:11:59.298695 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ca809eaaaad32548ccb944ec8a40abe7dc6349d1e4b1640e53d5cd1aba9e8798"} Dec 02 20:11:59 crc kubenswrapper[4796]: I1202 20:11:59.298721 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:11:59 crc kubenswrapper[4796]: I1202 20:11:59.299522 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:11:59 crc kubenswrapper[4796]: I1202 20:11:59.299547 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:11:59 crc kubenswrapper[4796]: I1202 20:11:59.299557 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:11:59 crc kubenswrapper[4796]: I1202 20:11:59.301580 4796 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086"} Dec 02 20:11:59 crc kubenswrapper[4796]: I1202 20:11:59.301611 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952"} Dec 02 20:11:59 crc kubenswrapper[4796]: I1202 20:11:59.301652 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db"} Dec 02 20:11:59 crc kubenswrapper[4796]: I1202 20:11:59.301664 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6"} Dec 02 20:11:59 crc kubenswrapper[4796]: I1202 20:11:59.303629 4796 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5" exitCode=0 Dec 02 20:11:59 crc kubenswrapper[4796]: I1202 20:11:59.303656 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5"} Dec 02 20:11:59 crc kubenswrapper[4796]: I1202 20:11:59.303812 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:11:59 crc kubenswrapper[4796]: I1202 20:11:59.304422 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:11:59 crc kubenswrapper[4796]: I1202 20:11:59.304449 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:11:59 crc kubenswrapper[4796]: I1202 20:11:59.304472 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:00 crc kubenswrapper[4796]: I1202 20:12:00.309818 4796 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7" exitCode=0 Dec 02 20:12:00 crc kubenswrapper[4796]: I1202 20:12:00.309918 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7"} Dec 02 20:12:00 crc kubenswrapper[4796]: I1202 20:12:00.310096 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:12:00 crc kubenswrapper[4796]: I1202 20:12:00.311041 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:00 crc kubenswrapper[4796]: I1202 20:12:00.311069 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:00 crc kubenswrapper[4796]: I1202 20:12:00.311078 4796 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:00 crc kubenswrapper[4796]: I1202 20:12:00.313128 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca"} Dec 02 20:12:00 crc kubenswrapper[4796]: I1202 20:12:00.313177 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:12:00 crc kubenswrapper[4796]: I1202 20:12:00.313181 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:12:00 crc kubenswrapper[4796]: I1202 20:12:00.314193 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:00 crc kubenswrapper[4796]: I1202 20:12:00.314230 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:00 crc kubenswrapper[4796]: I1202 20:12:00.314227 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:00 crc kubenswrapper[4796]: I1202 20:12:00.314246 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:00 crc kubenswrapper[4796]: I1202 20:12:00.314301 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:00 crc kubenswrapper[4796]: I1202 20:12:00.314344 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:00 crc kubenswrapper[4796]: I1202 20:12:00.432438 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:12:00 crc kubenswrapper[4796]: I1202 20:12:00.433367 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:00 crc kubenswrapper[4796]: I1202 20:12:00.433396 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:00 crc kubenswrapper[4796]: I1202 20:12:00.433415 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:00 crc kubenswrapper[4796]: I1202 20:12:00.433435 4796 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 02 20:12:01 crc kubenswrapper[4796]: I1202 20:12:01.193968 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 20:12:01 crc kubenswrapper[4796]: I1202 20:12:01.321021 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"417274ef4639b491835a4ae4d0df778270cdb94ee594d2ff8bfdd86b794ca488"} Dec 02 20:12:01 crc kubenswrapper[4796]: I1202 20:12:01.321078 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f59840a8a600786748b3a153bf8b1a784a55c3b43ffd837c5deaf1d14d15cbed"} Dec 02 20:12:01 crc kubenswrapper[4796]: I1202 20:12:01.321088 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2fc05d7f51b4dd38d72107bd79f5c2e1ecfd31843e251bbeb816545d3e491a3d"} Dec 02 20:12:01 crc kubenswrapper[4796]: I1202 20:12:01.321040 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:12:01 crc kubenswrapper[4796]: I1202 20:12:01.321176 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:12:01 crc kubenswrapper[4796]: I1202 20:12:01.321096 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c89bf933f9cea43b887da33832efb415751c7e4b2c920514543b1e30a2a584c5"} Dec 02 20:12:01 crc kubenswrapper[4796]: I1202 20:12:01.321269 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"acd0e59c5c92c3ed21d1c83f3d929d9294fc86b7d7518bc1ed5091fc7c055b7a"} Dec 02 20:12:01 crc kubenswrapper[4796]: I1202 20:12:01.321286 4796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 20:12:01 crc kubenswrapper[4796]: I1202 20:12:01.321364 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:12:01 crc kubenswrapper[4796]: I1202 20:12:01.322348 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:01 crc kubenswrapper[4796]: I1202 20:12:01.322347 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:01 crc kubenswrapper[4796]: I1202 20:12:01.322370 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:01 crc kubenswrapper[4796]: I1202 20:12:01.322379 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:01 crc kubenswrapper[4796]: I1202 20:12:01.322389 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:01 crc kubenswrapper[4796]: I1202 20:12:01.322381 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:01 crc kubenswrapper[4796]: I1202 20:12:01.322627 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:01 crc kubenswrapper[4796]: I1202 20:12:01.322644 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:01 crc kubenswrapper[4796]: I1202 20:12:01.322655 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:01 crc kubenswrapper[4796]: I1202 20:12:01.360224 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 02 20:12:01 crc kubenswrapper[4796]: I1202 20:12:01.422954 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 20:12:01 crc kubenswrapper[4796]: I1202 20:12:01.432663 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 20:12:02 crc kubenswrapper[4796]: I1202 
20:12:02.323357 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:12:02 crc kubenswrapper[4796]: I1202 20:12:02.323397 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:12:02 crc kubenswrapper[4796]: I1202 20:12:02.324830 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:02 crc kubenswrapper[4796]: I1202 20:12:02.324870 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:02 crc kubenswrapper[4796]: I1202 20:12:02.324881 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:02 crc kubenswrapper[4796]: I1202 20:12:02.325315 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:02 crc kubenswrapper[4796]: I1202 20:12:02.325364 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:02 crc kubenswrapper[4796]: I1202 20:12:02.325382 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:02 crc kubenswrapper[4796]: I1202 20:12:02.580142 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 20:12:02 crc kubenswrapper[4796]: I1202 20:12:02.580752 4796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 20:12:02 crc kubenswrapper[4796]: I1202 20:12:02.581050 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:12:02 crc kubenswrapper[4796]: I1202 20:12:02.583970 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:02 crc kubenswrapper[4796]: I1202 20:12:02.584040 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:02 crc kubenswrapper[4796]: I1202 20:12:02.584061 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:03 crc kubenswrapper[4796]: I1202 20:12:03.325933 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:12:03 crc kubenswrapper[4796]: I1202 20:12:03.326059 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:12:03 crc kubenswrapper[4796]: I1202 20:12:03.327108 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:03 crc kubenswrapper[4796]: I1202 20:12:03.327178 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:03 crc kubenswrapper[4796]: I1202 20:12:03.327207 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:03 crc kubenswrapper[4796]: I1202 20:12:03.327581 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:03 crc kubenswrapper[4796]: I1202 20:12:03.327666 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:03 crc 
kubenswrapper[4796]: I1202 20:12:03.327691 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:05 crc kubenswrapper[4796]: I1202 20:12:05.058024 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 20:12:05 crc kubenswrapper[4796]: I1202 20:12:05.058231 4796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 20:12:05 crc kubenswrapper[4796]: I1202 20:12:05.058353 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:12:05 crc kubenswrapper[4796]: I1202 20:12:05.059653 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:05 crc kubenswrapper[4796]: I1202 20:12:05.059785 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:05 crc kubenswrapper[4796]: I1202 20:12:05.059824 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:07 crc kubenswrapper[4796]: I1202 20:12:07.021578 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 20:12:07 crc kubenswrapper[4796]: I1202 20:12:07.021847 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:12:07 crc kubenswrapper[4796]: I1202 20:12:07.023893 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:07 crc kubenswrapper[4796]: I1202 20:12:07.023938 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:07 crc kubenswrapper[4796]: I1202 20:12:07.023950 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:07 crc kubenswrapper[4796]: E1202 20:12:07.327727 4796 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 02 20:12:07 crc kubenswrapper[4796]: I1202 20:12:07.345132 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 20:12:07 crc kubenswrapper[4796]: I1202 20:12:07.345460 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:12:07 crc kubenswrapper[4796]: I1202 20:12:07.346942 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:07 crc kubenswrapper[4796]: I1202 20:12:07.347015 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:07 crc kubenswrapper[4796]: I1202 20:12:07.347055 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:08 crc kubenswrapper[4796]: I1202 20:12:08.194108 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 20:12:08 crc kubenswrapper[4796]: I1202 20:12:08.194302 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:12:08 crc kubenswrapper[4796]: I1202 20:12:08.195379 4796 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:08 crc kubenswrapper[4796]: I1202 20:12:08.195434 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:08 crc kubenswrapper[4796]: I1202 20:12:08.195453 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:08 crc kubenswrapper[4796]: I1202 20:12:08.199144 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 20:12:08 crc kubenswrapper[4796]: I1202 20:12:08.338462 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:12:08 crc kubenswrapper[4796]: I1202 20:12:08.339236 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:08 crc kubenswrapper[4796]: I1202 20:12:08.339289 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:08 crc kubenswrapper[4796]: I1202 20:12:08.339299 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:08 crc kubenswrapper[4796]: I1202 20:12:08.364519 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 20:12:08 crc kubenswrapper[4796]: I1202 20:12:08.364718 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:12:08 crc kubenswrapper[4796]: I1202 20:12:08.365953 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:08 crc kubenswrapper[4796]: I1202 20:12:08.365986 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:08 crc kubenswrapper[4796]: I1202 20:12:08.365996 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:09 crc kubenswrapper[4796]: I1202 20:12:09.211497 4796 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 02 20:12:09 crc kubenswrapper[4796]: I1202 20:12:09.788399 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 02 20:12:09 crc kubenswrapper[4796]: I1202 20:12:09.788591 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:12:09 crc kubenswrapper[4796]: I1202 20:12:09.789721 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:09 crc kubenswrapper[4796]: I1202 20:12:09.789759 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:09 crc kubenswrapper[4796]: I1202 20:12:09.789776 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:10 crc kubenswrapper[4796]: I1202 20:12:10.026622 4796 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure 
output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 02 20:12:10 crc kubenswrapper[4796]: I1202 20:12:10.026700 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 02 20:12:10 crc kubenswrapper[4796]: I1202 20:12:10.033978 4796 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 02 20:12:10 crc kubenswrapper[4796]: I1202 20:12:10.034042 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 02 20:12:11 crc kubenswrapper[4796]: I1202 20:12:11.194909 4796 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 02 20:12:11 crc kubenswrapper[4796]: I1202 20:12:11.195212 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 02 20:12:12 crc kubenswrapper[4796]: I1202 20:12:12.586348 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 20:12:12 crc kubenswrapper[4796]: I1202 20:12:12.586615 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:12:12 crc kubenswrapper[4796]: I1202 20:12:12.588099 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:12 crc kubenswrapper[4796]: I1202 20:12:12.588128 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:12 crc kubenswrapper[4796]: I1202 20:12:12.588135 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:12 crc kubenswrapper[4796]: I1202 20:12:12.592050 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 20:12:13 crc kubenswrapper[4796]: I1202 20:12:13.349120 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:12:13 crc kubenswrapper[4796]: I1202 20:12:13.350504 4796 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:13 crc kubenswrapper[4796]: I1202 20:12:13.350570 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:13 crc kubenswrapper[4796]: I1202 20:12:13.350589 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:15 crc kubenswrapper[4796]: E1202 20:12:15.020430 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.022202 4796 trace.go:236] Trace[892114135]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Dec-2025 20:12:00.844) (total time: 14177ms): Dec 02 20:12:15 crc kubenswrapper[4796]: Trace[892114135]: ---"Objects listed" error: 14177ms (20:12:15.022) Dec 02 20:12:15 crc kubenswrapper[4796]: Trace[892114135]: [14.177263781s] [14.177263781s] END Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.022245 4796 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.056667 4796 trace.go:236] Trace[752571429]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Dec-2025 20:12:01.150) (total time: 13905ms): Dec 02 20:12:15 crc kubenswrapper[4796]: Trace[752571429]: ---"Objects listed" error: 13905ms (20:12:15.056) Dec 02 20:12:15 crc kubenswrapper[4796]: Trace[752571429]: [13.90565165s] [13.90565165s] END Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.056714 4796 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.056719 4796 trace.go:236] Trace[956408575]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Dec-2025 20:12:00.662) (total time: 14394ms): Dec 02 20:12:15 crc kubenswrapper[4796]: Trace[956408575]: ---"Objects listed" error: 14394ms (20:12:15.056) Dec 02 20:12:15 crc kubenswrapper[4796]: Trace[956408575]: [14.394484129s] [14.394484129s] END Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.056737 4796 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.056798 4796 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.057577 4796 trace.go:236] Trace[1336887317]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Dec-2025 20:12:00.261) (total time: 14795ms): Dec 02 20:12:15 crc kubenswrapper[4796]: Trace[1336887317]: ---"Objects listed" error: 14795ms (20:12:15.057) Dec 02 20:12:15 crc kubenswrapper[4796]: Trace[1336887317]: [14.795669153s] [14.795669153s] END Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.057620 4796 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 02 20:12:15 crc kubenswrapper[4796]: E1202 20:12:15.058750 4796 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.083969 4796 patch_prober.go:28] interesting 
pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:46732->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.084031 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:46732->192.168.126.11:17697: read: connection reset by peer" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.084410 4796 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.084464 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.214633 4796 apiserver.go:52] "Watching apiserver" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.216934 4796 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.217301 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"] Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.217656 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.217818 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:12:15 crc kubenswrapper[4796]: E1202 20:12:15.217919 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.217826 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.218321 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.218345 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:12:15 crc kubenswrapper[4796]: E1202 20:12:15.218385 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.218443 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:12:15 crc kubenswrapper[4796]: E1202 20:12:15.218479 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.219379 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.219762 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.219782 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.219799 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.219767 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.219892 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.220140 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.220404 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.220692 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.245446 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.255426 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.273795 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.291977 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.307452 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.313631 4796 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.319175 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.342232 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.354132 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.355939 4796 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca" exitCode=255 Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.356032 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca"} Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.358405 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.358543 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.358612 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.358673 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.358744 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod 
\"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.358824 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.358886 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.358928 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.358951 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.359022 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.359048 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.359073 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.359175 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.359211 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.359233 4796 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.359278 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.359303 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.359308 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.359329 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.359355 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.359381 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.359408 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.359432 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.359455 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: 
\"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.359479 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.359500 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.359524 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.359551 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.359575 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.359600 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.359623 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.359647 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.359674 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.359699 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.359724 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.359749 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.359772 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.359797 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.359823 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.359352 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.359405 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.359500 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.359560 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.359590 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.359673 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.359689 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.359770 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.359809 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: E1202 20:12:15.359850 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:12:15.85983084 +0000 UTC m=+18.863206374 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.361399 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.361473 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.361537 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.361719 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.361865 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.361993 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.362056 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.362117 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.362175 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.362240 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.362330 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.362395 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.362458 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.362521 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.362582 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.362650 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.362713 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.362782 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.362844 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.362906 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.362980 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.363043 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.363106 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.363171 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.363231 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.363333 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.363399 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.363711 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.363782 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: 
\"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.363844 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.363911 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.363980 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.364053 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.364130 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.364204 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.364288 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.361399 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.361463 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.360101 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.364376 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.360206 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.360233 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.360340 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.360401 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.360452 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.360508 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.360518 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.360528 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.360655 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.360666 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.360791 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.360807 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.360909 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.360931 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.361089 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.361151 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.361294 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.361319 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.361481 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.359948 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.361515 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). 
InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.361590 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.362725 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.362906 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.362970 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.363160 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.363206 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.363224 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.363238 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.363288 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.363395 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.363396 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.363724 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.363826 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.363873 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.363827 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.364014 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.364103 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.364239 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.364690 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.364308 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.364325 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.364836 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.365106 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.365142 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.364352 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.365331 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.365349 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.365365 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.365382 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.365401 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.365416 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.365436 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.365457 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.365476 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.365492 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.365509 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.365525 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.365546 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.365570 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.365588 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.365607 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.365626 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.365647 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.365663 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.365716 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.365733 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.365749 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.365764 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.365779 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.365794 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.365809 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.365824 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.365841 
4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.365858 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.365874 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.365889 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.365905 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.365921 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.365937 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.365953 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.365969 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.365987 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 
20:12:15.366003 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366018 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366031 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366047 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366061 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366077 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366092 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366108 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366125 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366140 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 
20:12:15.366157 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366172 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366187 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366201 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366215 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366232 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366261 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366280 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366296 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366360 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 02 20:12:15 crc 
kubenswrapper[4796]: I1202 20:12:15.366378 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366393 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366408 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366425 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366440 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366458 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366478 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366497 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366520 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366535 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 02 20:12:15 
crc kubenswrapper[4796]: I1202 20:12:15.366551 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366566 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366581 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366597 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366648 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366664 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366681 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366697 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366714 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366730 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366747 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366762 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366777 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366792 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366808 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366822 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366838 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366854 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366869 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366885 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366901 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366919 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366935 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366950 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366966 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.366984 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367010 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367009 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367027 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367044 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367061 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367079 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367142 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367159 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367175 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367191 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367208 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367225 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" 
(UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367243 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367273 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367299 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367314 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367331 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367346 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367365 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367381 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367396 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367412 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod 
\"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367427 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367444 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367459 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367476 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367494 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367530 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367546 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367585 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367608 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367629 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367650 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367667 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367686 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367707 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367727 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367743 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367760 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367781 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367799 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367816 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367834 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367891 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367903 4796 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367913 4796 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367923 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367933 4796 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367943 4796 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367952 4796 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367961 4796 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367971 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367980 4796 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367990 4796 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368000 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368010 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368019 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368028 4796 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368039 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368048 4796 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368058 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368067 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368076 4796 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368085 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" 
DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368094 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368104 4796 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368113 4796 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368122 4796 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368131 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368141 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368152 4796 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368161 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368169 4796 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368179 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368189 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368198 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368208 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: 
I1202 20:12:15.368217 4796 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368226 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368236 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368245 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368279 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368289 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368300 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368309 4796 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368319 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368329 4796 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368338 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368347 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368356 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" 
DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368367 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368376 4796 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367221 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367591 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367661 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.367842 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368077 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368132 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368368 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368429 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368493 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368575 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368652 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368729 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368756 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368779 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). 
InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368789 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368864 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368970 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.369019 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.369090 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.369318 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.369385 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.369520 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.369722 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.369871 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.370000 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.370150 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.370212 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.370517 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.370554 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.370601 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.370616 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.370673 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.370810 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.370818 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.370830 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.370857 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.370960 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.371011 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.371047 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.371231 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.371407 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.371673 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.371759 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.371808 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.371903 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.371979 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.372441 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.372527 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.372594 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.372813 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.372822 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.372864 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.372915 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.373021 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.373077 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.373134 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.373176 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.373195 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.373499 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.373642 4796 scope.go:117] "RemoveContainer" containerID="84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.373656 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.373779 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.373924 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.374141 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.374149 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: E1202 20:12:15.374338 4796 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.374391 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.374432 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.374678 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.374725 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 20:12:15 crc kubenswrapper[4796]: E1202 20:12:15.374772 4796 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.374788 4796 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.374868 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.374966 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.375098 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.375887 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.376122 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.376345 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.376521 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.376775 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.377160 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.377285 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.377328 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.377644 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.377651 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.377879 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.377928 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.377931 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.378094 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.378146 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.368385 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.378150 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: E1202 20:12:15.378237 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 20:12:15.87822114 +0000 UTC m=+18.881596674 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.378404 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.378421 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.378676 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.378689 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.378767 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.378768 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.379010 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.379081 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.379297 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.379369 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.379375 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.379402 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). 
InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.379605 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.379651 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.379628 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.379726 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.379914 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.379987 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.380076 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.380240 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.380359 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.380650 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.380807 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.380906 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.380934 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.380965 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.381099 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.381337 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.381407 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.381486 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.381541 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.381545 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.381736 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.384244 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: E1202 20:12:15.384403 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 20:12:15.884383001 +0000 UTC m=+18.887758605 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.384744 4796 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.384777 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.384792 4796 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.384837 4796 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.384851 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.384864 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.384876 4796 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.384887 4796 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.384911 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.384924 4796 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.384936 4796 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.385416 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" 
(OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.390018 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.391486 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.393427 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.393545 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.393862 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.399786 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.400207 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.401501 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.401565 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.401655 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: E1202 20:12:15.401851 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.401855 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: E1202 20:12:15.401873 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 20:12:15 crc kubenswrapper[4796]: E1202 20:12:15.401888 4796 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.401963 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: E1202 20:12:15.401976 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 20:12:15.901955922 +0000 UTC m=+18.905331556 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.402184 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: E1202 20:12:15.402374 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 20:12:15 crc kubenswrapper[4796]: E1202 20:12:15.402394 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 20:12:15 crc kubenswrapper[4796]: E1202 20:12:15.402406 4796 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 20:12:15 crc kubenswrapper[4796]: E1202 20:12:15.402455 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 20:12:15.902436143 +0000 UTC m=+18.905811677 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.402603 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.405728 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.406747 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.407562 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.413091 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.416341 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.418304 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.421352 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.427783 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.430168 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.430463 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.439026 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.447809 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486218 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486301 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486366 4796 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486376 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486385 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486393 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486401 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486409 4796 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486417 4796 
reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486425 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486434 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486443 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486451 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486461 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486464 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486480 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486472 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486557 4796 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486579 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486676 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486690 4796 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486709 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486721 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486732 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486744 4796 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486755 4796 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486766 4796 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486776 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486788 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486801 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486813 4796 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486824 4796 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486839 4796 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc 
kubenswrapper[4796]: I1202 20:12:15.486851 4796 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486863 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486874 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486885 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486896 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486907 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486918 4796 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486929 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486941 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486951 4796 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486962 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486973 4796 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486984 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.486996 4796 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487009 4796 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487020 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487031 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487044 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487055 4796 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487066 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487078 4796 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487089 4796 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487135 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487148 4796 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487159 4796 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487170 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487183 4796 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487194 4796 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487206 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487216 4796 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487227 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487238 4796 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487267 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487280 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487292 4796 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487303 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487341 4796 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487355 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487367 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487378 4796 reconciler_common.go:293] "Volume detached for volume 
\"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487389 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487400 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487411 4796 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487422 4796 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487433 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487444 4796 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487455 4796 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487466 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487477 4796 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487500 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487512 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487522 4796 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487543 4796 reconciler_common.go:293] "Volume detached for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487563 4796 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487576 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487587 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487598 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487610 4796 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487621 4796 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487632 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487643 4796 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487653 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487664 4796 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487675 4796 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487687 4796 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487699 4796 reconciler_common.go:293] 
"Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487710 4796 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487721 4796 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487733 4796 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487745 4796 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487756 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487767 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487779 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487791 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487802 4796 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487813 4796 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487823 4796 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487834 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487845 4796 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487857 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487868 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487879 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487892 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487903 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487914 4796 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487930 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487940 4796 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487953 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487964 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487975 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487986 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.487998 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.488009 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.488021 4796 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.488032 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.488044 4796 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.488058 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.488069 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.488080 4796 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.488091 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.488103 4796 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.488115 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.488130 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.488143 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.531391 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.537884 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.544295 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 20:12:15 crc kubenswrapper[4796]: W1202 20:12:15.544956 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-09d7acfb51bf3ed538f67a9a033542b2213556dabe04a8032feffbbdf962b11a WatchSource:0}: Error finding container 09d7acfb51bf3ed538f67a9a033542b2213556dabe04a8032feffbbdf962b11a: Status 404 returned error can't find the container with id 09d7acfb51bf3ed538f67a9a033542b2213556dabe04a8032feffbbdf962b11a Dec 02 20:12:15 crc kubenswrapper[4796]: W1202 20:12:15.552175 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-636ae480927d899f7d8aa7cbd88ba95fe9a62fc9557cbaddcfd2eb2413de4c9b WatchSource:0}: Error finding container 636ae480927d899f7d8aa7cbd88ba95fe9a62fc9557cbaddcfd2eb2413de4c9b: Status 404 returned error can't find the container with id 636ae480927d899f7d8aa7cbd88ba95fe9a62fc9557cbaddcfd2eb2413de4c9b Dec 02 20:12:15 crc kubenswrapper[4796]: W1202 20:12:15.560144 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-dbb7860527387866e65c744d6f62c439bc461a88a065e703b5ac5ef94cb3c06b WatchSource:0}: Error finding container dbb7860527387866e65c744d6f62c439bc461a88a065e703b5ac5ef94cb3c06b: Status 404 returned error can't find the container with id dbb7860527387866e65c744d6f62c439bc461a88a065e703b5ac5ef94cb3c06b Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.891764 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.891870 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.891900 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:12:15 crc kubenswrapper[4796]: E1202 20:12:15.891997 4796 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:12:16.891964818 +0000 UTC m=+19.895340362 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:12:15 crc kubenswrapper[4796]: E1202 20:12:15.892005 4796 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 20:12:15 crc kubenswrapper[4796]: E1202 20:12:15.892037 4796 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 20:12:15 crc kubenswrapper[4796]: E1202 20:12:15.892074 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 20:12:16.89206431 +0000 UTC m=+19.895439844 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 20:12:15 crc kubenswrapper[4796]: E1202 20:12:15.892099 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 20:12:16.892083171 +0000 UTC m=+19.895458725 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.992477 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:12:15 crc kubenswrapper[4796]: I1202 20:12:15.992538 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:12:15 crc kubenswrapper[4796]: E1202 20:12:15.992646 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 20:12:15 crc kubenswrapper[4796]: E1202 20:12:15.992659 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 20:12:15 crc kubenswrapper[4796]: E1202 20:12:15.992669 4796 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 20:12:15 crc kubenswrapper[4796]: E1202 20:12:15.992708 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 20:12:16.992696366 +0000 UTC m=+19.996071900 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 20:12:15 crc kubenswrapper[4796]: E1202 20:12:15.992918 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 20:12:15 crc kubenswrapper[4796]: E1202 20:12:15.992935 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 20:12:15 crc kubenswrapper[4796]: E1202 20:12:15.992944 4796 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 20:12:15 crc kubenswrapper[4796]: E1202 20:12:15.992969 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 20:12:16.992961412 +0000 UTC m=+19.996336946 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 20:12:16 crc kubenswrapper[4796]: I1202 20:12:16.126234 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 20:12:16 crc kubenswrapper[4796]: I1202 20:12:16.360447 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"dbb7860527387866e65c744d6f62c439bc461a88a065e703b5ac5ef94cb3c06b"} Dec 02 20:12:16 crc kubenswrapper[4796]: I1202 20:12:16.362758 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c"} Dec 02 20:12:16 crc kubenswrapper[4796]: I1202 20:12:16.362806 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c"} Dec 02 20:12:16 crc kubenswrapper[4796]: I1202 20:12:16.362818 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"636ae480927d899f7d8aa7cbd88ba95fe9a62fc9557cbaddcfd2eb2413de4c9b"} Dec 02 20:12:16 crc kubenswrapper[4796]: I1202 20:12:16.365112 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"38b634e2ec6fa9c531cec805314972afc064cf0790587894836a857a7c48e66b"} Dec 02 20:12:16 crc kubenswrapper[4796]: I1202 20:12:16.365179 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"09d7acfb51bf3ed538f67a9a033542b2213556dabe04a8032feffbbdf962b11a"} Dec 02 20:12:16 crc kubenswrapper[4796]: I1202 20:12:16.368028 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 02 20:12:16 crc kubenswrapper[4796]: I1202 20:12:16.370542 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc"} Dec 02 20:12:16 crc kubenswrapper[4796]: I1202 20:12:16.370786 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 20:12:16 crc kubenswrapper[4796]: I1202 20:12:16.387705 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423e1ef0-d2b1-442c-8751-372d2de26a00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02
T20:12:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 20:12:09.715158 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 20:12:09.723356 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1998965949/tls.crt::/tmp/serving-cert-1998965949/tls.key\\\\\\\"\\\\nI1202 20:12:15.056297 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 20:12:15.060468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 20:12:15.060506 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 20:12:15.060526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 20:12:15.060532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 20:12:15.077181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1202 20:12:15.077200 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 20:12:15.077201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077211 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077216 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 20:12:15.077220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 20:12:15.077223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 20:12:15.077226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 20:12:15.079033 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:16Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:16 crc kubenswrapper[4796]: I1202 20:12:16.406317 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:16Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:16 crc kubenswrapper[4796]: I1202 20:12:16.420099 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:16Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:16 crc kubenswrapper[4796]: I1202 20:12:16.434033 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:16Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:16 crc kubenswrapper[4796]: I1202 20:12:16.445890 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:16Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:16 crc kubenswrapper[4796]: I1202 20:12:16.460327 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:16Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:16 crc kubenswrapper[4796]: I1202 20:12:16.470909 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:16Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:16 crc kubenswrapper[4796]: I1202 20:12:16.480724 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:16Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:16 crc kubenswrapper[4796]: I1202 20:12:16.495083 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423e1ef0-d2b1-442c-8751-372d2de26a00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\\\",\\\"image\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 20:12:09.715158 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 20:12:09.723356 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1998965949/tls.crt::/tmp/serving-cert-1998965949/tls.key\\\\\\\"\\\\nI1202 20:12:15.056297 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 20:12:15.060468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 20:12:15.060506 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 20:12:15.060526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 20:12:15.060532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 20:12:15.077181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1202 20:12:15.077200 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 20:12:15.077201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077211 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077216 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 20:12:15.077220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 20:12:15.077223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 20:12:15.077226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 20:12:15.079033 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:16Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:16 crc kubenswrapper[4796]: I1202 20:12:16.513083 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b634e2ec6fa9c531cec805314972afc064cf0790587894836a857a7c48e66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:16Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:16 crc kubenswrapper[4796]: I1202 20:12:16.525867 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:16Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:16 crc kubenswrapper[4796]: I1202 20:12:16.536043 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:16Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:16 crc kubenswrapper[4796]: I1202 20:12:16.548221 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:16Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:16 crc kubenswrapper[4796]: I1202 20:12:16.560132 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:16Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:16 crc kubenswrapper[4796]: I1202 20:12:16.900526 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:12:16 crc kubenswrapper[4796]: I1202 20:12:16.900609 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:12:16 crc kubenswrapper[4796]: I1202 20:12:16.900640 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:12:16 crc kubenswrapper[4796]: E1202 20:12:16.900707 4796 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 20:12:16 crc kubenswrapper[4796]: E1202 20:12:16.900737 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:12:18.900702874 +0000 UTC m=+21.904078408 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:12:16 crc kubenswrapper[4796]: E1202 20:12:16.900756 4796 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 20:12:16 crc kubenswrapper[4796]: E1202 20:12:16.900786 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 20:12:18.900770286 +0000 UTC m=+21.904145820 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 20:12:16 crc kubenswrapper[4796]: E1202 20:12:16.900812 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 20:12:18.900796626 +0000 UTC m=+21.904172180 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.001170 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.001234 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:12:17 crc kubenswrapper[4796]: E1202 20:12:17.001292 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 20:12:17 crc kubenswrapper[4796]: E1202 20:12:17.001312 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 20:12:17 crc kubenswrapper[4796]: E1202 20:12:17.001322 4796 
projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 20:12:17 crc kubenswrapper[4796]: E1202 20:12:17.001371 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 20:12:19.00135579 +0000 UTC m=+22.004731324 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 20:12:17 crc kubenswrapper[4796]: E1202 20:12:17.001450 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 20:12:17 crc kubenswrapper[4796]: E1202 20:12:17.001487 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 20:12:17 crc kubenswrapper[4796]: E1202 20:12:17.001502 4796 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 20:12:17 crc kubenswrapper[4796]: E1202 20:12:17.001605 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 20:12:19.001583996 +0000 UTC m=+22.004959600 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.264313 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.264548 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.264584 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:12:17 crc kubenswrapper[4796]: E1202 20:12:17.264551 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:12:17 crc kubenswrapper[4796]: E1202 20:12:17.264696 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:12:17 crc kubenswrapper[4796]: E1202 20:12:17.264829 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.272533 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.273693 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.275941 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.276786 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.277401 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.277899 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.278500 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.279007 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 02 20:12:17 crc 
kubenswrapper[4796]: I1202 20:12:17.279684 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.280202 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.280687 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.280954 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:17Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.281320 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.281800 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.282319 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.282821 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.283328 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.283893 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.285034 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.287054 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.288503 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.290483 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.291906 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.292855 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.294474 4796 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.294932 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.295638 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.296350 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.296839 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.297461 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.297925 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.298527 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:17Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.299343 4796 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.299574 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.302547 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.303068 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.303534 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.304897 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.305634 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.306160 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.306862 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.307562 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.308054 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.308696 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.309372 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.310153 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.313137 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.313839 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.314413 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.315500 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.316103 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.316688 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.317213 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.317890 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.318803 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"423e1ef0-d2b1-442c-8751-372d2de26a00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 20:12:09.715158 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 20:12:09.723356 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1998965949/tls.crt::/tmp/serving-cert-1998965949/tls.key\\\\\\\"\\\\nI1202 20:12:15.056297 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 20:12:15.060468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 20:12:15.060506 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 20:12:15.060526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 20:12:15.060532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 20:12:15.077181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1202 20:12:15.077200 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 20:12:15.077201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077211 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077216 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 20:12:15.077220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 20:12:15.077223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 20:12:15.077226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 20:12:15.079033 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:17Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.319423 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.320097 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.339725 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b634e2ec6fa9c531cec805314972afc064cf0790587894836a857a7c48e66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:17Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.357411 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:17Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.372807 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:17Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:17 crc kubenswrapper[4796]: I1202 20:12:17.387598 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:17Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.199015 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.203479 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.209152 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.212619 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:18Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.229062 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:18Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.241601 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423e1ef0-d2b1-442c-8751-372d2de26a00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 20:12:09.715158 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 20:12:09.723356 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1998965949/tls.crt::/tmp/serving-cert-1998965949/tls.key\\\\\\\"\\\\nI1202 20:12:15.056297 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 20:12:15.060468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 20:12:15.060506 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 20:12:15.060526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 20:12:15.060532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 20:12:15.077181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1202 20:12:15.077200 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 20:12:15.077201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077211 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077216 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 20:12:15.077220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 20:12:15.077223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 20:12:15.077226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 20:12:15.079033 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:18Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.256637 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b634e2ec6fa9c531cec805314972afc064cf0790587894836a857a7c48e66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:18Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.258900 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.260573 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.260602 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.260612 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.260684 4796 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.267283 4796 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.267566 4796 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.268500 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.268526 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 
02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.268535 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.268545 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.268555 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:18Z","lastTransitionTime":"2025-12-02T20:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.272007 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:18Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.287451 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:18Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:18 crc kubenswrapper[4796]: E1202 20:12:18.289399 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"45bad139-cd57-490c-a638-731a42709a6c\\\",\\\"systemUUID\\\":\\\"9fca617c-b4ed-442d-9d01-94fab08be868\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:18Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.292531 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.292564 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.292572 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.292586 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.292596 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:18Z","lastTransitionTime":"2025-12-02T20:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.299787 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:18Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:18 crc kubenswrapper[4796]: E1202 20:12:18.305333 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"45bad139-cd57-490c-a638-731a42709a6c\\\",\\\"systemUUID\\\":\\\"9fca617c-b4ed-442d-9d01-94fab08be868\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:18Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.311646 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.311681 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.311691 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.311705 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.311715 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:18Z","lastTransitionTime":"2025-12-02T20:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.314561 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"o
vnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:18Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:18 crc kubenswrapper[4796]: E1202 20:12:18.328344 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"45bad139-cd57-490c-a638-731a42709a6c\\\",\\\"systemUUID\\\":\\\"9fca617c-b4ed-442d-9d01-94fab08be868\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:18Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.330142 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b634e2ec6fa9c531cec805314972afc064cf0790587894836a857a7c48e66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:18Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.332012 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.332048 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.332060 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.332079 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.332092 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:18Z","lastTransitionTime":"2025-12-02T20:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:18 crc kubenswrapper[4796]: E1202 20:12:18.344870 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"45bad139-cd57-490c-a638-731a42709a6c\\\",\\\"systemUUID\\\":\\\"9fca617c-b4ed-442d-9d01-94fab08be868\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:18Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.346608 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:18Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.349124 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.349267 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.349360 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.349470 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.349572 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:18Z","lastTransitionTime":"2025-12-02T20:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.358785 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:18Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:18 crc kubenswrapper[4796]: E1202 20:12:18.362359 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"45bad139-cd57-490c-a638-731a42709a6c\\\",\\\"systemUUID\\\":\\\"9fca617c-b4ed-442d-9d01-94fab08be868\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:18Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:18 crc kubenswrapper[4796]: E1202 20:12:18.362627 4796 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.364075 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.364104 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.364113 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.364129 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.364141 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:18Z","lastTransitionTime":"2025-12-02T20:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.371237 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:18Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.376032 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"64bd216ef667b40cff69e55295ff7ac4d61f8e893dc1f24f12f42aed5463417b"} Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.388289 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423e1ef0-d2b1-442c-8751-372d2de26a00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 20:12:09.715158 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 20:12:09.723356 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1998965949/tls.crt::/tmp/serving-cert-1998965949/tls.key\\\\\\\"\\\\nI1202 20:12:15.056297 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 20:12:15.060468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 20:12:15.060506 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 20:12:15.060526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 20:12:15.060532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 20:12:15.077181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1202 20:12:15.077200 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 20:12:15.077201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077211 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077216 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 20:12:15.077220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 20:12:15.077223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 20:12:15.077226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 20:12:15.079033 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:18Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.400008 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9a3760c-5324-4e7c-ba6d-947d4200fd7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca809eaaaad32548ccb944ec8a40abe7dc6349d1e4b1640e53d5cd1aba9e8798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3931263dfbba517a3f109c2fec313fdb38b7232f4384fbf922d2de2f40d95840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://567f06ef173b505801a2b909e2ffd85973c697cbd3d89c10a5b118247c58c193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bce1cb4a264ff1c25ca77f3e4c9aa251e20002d987f4e1aa61fce2c99799f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:18Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.411356 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:18Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.426130 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423e1ef0-d2b1-442c-8751-372d2de26a00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 20:12:09.715158 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 20:12:09.723356 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1998965949/tls.crt::/tmp/serving-cert-1998965949/tls.key\\\\\\\"\\\\nI1202 20:12:15.056297 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 20:12:15.060468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 20:12:15.060506 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 20:12:15.060526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 20:12:15.060532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 20:12:15.077181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1202 20:12:15.077200 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 20:12:15.077201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077211 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077216 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 20:12:15.077220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 20:12:15.077223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 20:12:15.077226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 20:12:15.079033 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:18Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.441598 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9a3760c-5324-4e7c-ba6d-947d4200fd7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca809eaaaad32548ccb944ec8a40abe7dc6349d1e4b1640e53d5cd1aba9e8798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3931263dfbba517a3f109c2fec313fdb38b7232f4384fbf922d2de2f40d95840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://567f06ef173b505801a2b909e2ffd85973c697cbd3d89c10a5b118247c58c193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bce1cb4a264ff1c25ca77f3e4c9aa251e20002d987f4e1aa61fce2c99799f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:18Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.455481 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b634e2ec6fa9c531cec805314972afc064cf0790587894836a857a7c48e66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:18Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.466217 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.466297 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.466318 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.466345 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.466366 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:18Z","lastTransitionTime":"2025-12-02T20:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.467742 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:18Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.478862 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64bd216ef667b40cff69e55295ff7ac4d61f8e893dc1f24f12f42aed5463417b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:18Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.492654 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:18Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.503090 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:18Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.514727 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:18Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.568589 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.568624 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.568633 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.568647 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.568657 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:18Z","lastTransitionTime":"2025-12-02T20:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.671674 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.671941 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.672048 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.672173 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.672371 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:18Z","lastTransitionTime":"2025-12-02T20:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.774768 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.775102 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.775209 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.775344 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.775440 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:18Z","lastTransitionTime":"2025-12-02T20:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.878011 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.878049 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.878061 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.878077 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.878090 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:18Z","lastTransitionTime":"2025-12-02T20:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.917381 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.917455 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.917496 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:12:18 crc kubenswrapper[4796]: E1202 20:12:18.917563 4796 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 20:12:18 crc kubenswrapper[4796]: E1202 20:12:18.917612 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 20:12:22.917596982 +0000 UTC m=+25.920972506 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 20:12:18 crc kubenswrapper[4796]: E1202 20:12:18.917911 4796 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 20:12:18 crc kubenswrapper[4796]: E1202 20:12:18.917997 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:12:22.917962161 +0000 UTC m=+25.921337705 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:12:18 crc kubenswrapper[4796]: E1202 20:12:18.918106 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 20:12:22.918086554 +0000 UTC m=+25.921462098 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.979826 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.979861 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.979871 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.979886 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:18 crc kubenswrapper[4796]: I1202 20:12:18.979895 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:18Z","lastTransitionTime":"2025-12-02T20:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.018230 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.018292 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:12:19 crc kubenswrapper[4796]: E1202 20:12:19.018413 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 20:12:19 crc kubenswrapper[4796]: E1202 20:12:19.018429 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 20:12:19 crc kubenswrapper[4796]: E1202 20:12:19.018439 4796 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 20:12:19 crc kubenswrapper[4796]: E1202 20:12:19.018481 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 20:12:23.018468064 +0000 UTC m=+26.021843598 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 20:12:19 crc kubenswrapper[4796]: E1202 20:12:19.018896 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 20:12:19 crc kubenswrapper[4796]: E1202 20:12:19.019031 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 20:12:19 crc kubenswrapper[4796]: E1202 20:12:19.019113 4796 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 20:12:19 crc kubenswrapper[4796]: E1202 20:12:19.019244 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 20:12:23.019223632 +0000 UTC m=+26.022599226 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.082723 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.082772 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.082784 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.082805 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.082820 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:19Z","lastTransitionTime":"2025-12-02T20:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.185101 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.185143 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.185153 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.185167 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.185178 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:19Z","lastTransitionTime":"2025-12-02T20:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.264203 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.264285 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.264215 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:12:19 crc kubenswrapper[4796]: E1202 20:12:19.264371 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:12:19 crc kubenswrapper[4796]: E1202 20:12:19.264445 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:12:19 crc kubenswrapper[4796]: E1202 20:12:19.264557 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.286998 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.287037 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.287049 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.287066 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.287089 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:19Z","lastTransitionTime":"2025-12-02T20:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.388872 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.388918 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.388929 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.388943 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.388954 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:19Z","lastTransitionTime":"2025-12-02T20:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.476733 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-mpjq8"] Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.476991 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-mpjq8" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.478803 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.478886 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.479499 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.490308 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64bc04e9-c9fc-4a80-98de-59c88457ace6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpzx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:19Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.490852 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.490895 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.490907 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.490925 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.490939 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:19Z","lastTransitionTime":"2025-12-02T20:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.504272 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"o
vnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:19Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.517559 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9a3760c-5324-4e7c-ba6d-947d4200fd7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca809eaaaad32548ccb944ec8a40abe7dc6349d1e4b1640e53d5cd1aba9e8798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3931263dfbba517a3f109c2fec313fdb38b7232f4384fbf922d2de2f40d95840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://567f06ef173b505801a2b909e2ffd85973c697cbd3d89c10a5b118247c58c193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb
2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bce1cb4a264ff1c25ca77f3e4c9aa251e20002d987f4e1aa61fce2c99799f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:19Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.532879 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b634e2ec6fa9c531cec805314972afc064cf0790587894836a857a7c48e66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:19Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.546084 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:19Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.556511 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64bd216ef667b40cff69e55295ff7ac4d61f8e893dc1f24f12f42aed5463417b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:19Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.567225 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:19Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.579684 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423e1ef0-d2b1-442c-8751-372d2de26a00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 20:12:09.715158 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 20:12:09.723356 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1998965949/tls.crt::/tmp/serving-cert-1998965949/tls.key\\\\\\\"\\\\nI1202 20:12:15.056297 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 20:12:15.060468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 20:12:15.060506 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 20:12:15.060526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 20:12:15.060532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 20:12:15.077181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1202 20:12:15.077200 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 20:12:15.077201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077211 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077216 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 20:12:15.077220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 20:12:15.077223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 20:12:15.077226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 20:12:15.079033 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:19Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.592954 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.592997 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.593008 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.593025 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.593038 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:19Z","lastTransitionTime":"2025-12-02T20:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.593660 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:19Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.623131 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/64bc04e9-c9fc-4a80-98de-59c88457ace6-hosts-file\") pod \"node-resolver-mpjq8\" (UID: \"64bc04e9-c9fc-4a80-98de-59c88457ace6\") " pod="openshift-dns/node-resolver-mpjq8" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.623168 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpzx2\" (UniqueName: \"kubernetes.io/projected/64bc04e9-c9fc-4a80-98de-59c88457ace6-kube-api-access-fpzx2\") pod \"node-resolver-mpjq8\" (UID: \"64bc04e9-c9fc-4a80-98de-59c88457ace6\") " pod="openshift-dns/node-resolver-mpjq8" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.695413 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.695440 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.695447 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.695459 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.695468 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:19Z","lastTransitionTime":"2025-12-02T20:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.724724 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/64bc04e9-c9fc-4a80-98de-59c88457ace6-hosts-file\") pod \"node-resolver-mpjq8\" (UID: \"64bc04e9-c9fc-4a80-98de-59c88457ace6\") " pod="openshift-dns/node-resolver-mpjq8" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.724799 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpzx2\" (UniqueName: \"kubernetes.io/projected/64bc04e9-c9fc-4a80-98de-59c88457ace6-kube-api-access-fpzx2\") pod \"node-resolver-mpjq8\" (UID: \"64bc04e9-c9fc-4a80-98de-59c88457ace6\") " pod="openshift-dns/node-resolver-mpjq8" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.724974 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/64bc04e9-c9fc-4a80-98de-59c88457ace6-hosts-file\") pod \"node-resolver-mpjq8\" (UID: \"64bc04e9-c9fc-4a80-98de-59c88457ace6\") " pod="openshift-dns/node-resolver-mpjq8" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.743846 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpzx2\" (UniqueName: \"kubernetes.io/projected/64bc04e9-c9fc-4a80-98de-59c88457ace6-kube-api-access-fpzx2\") pod \"node-resolver-mpjq8\" (UID: \"64bc04e9-c9fc-4a80-98de-59c88457ace6\") " pod="openshift-dns/node-resolver-mpjq8" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.793683 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-mpjq8" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.797558 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.797605 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.797614 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.797635 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.797646 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:19Z","lastTransitionTime":"2025-12-02T20:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:19 crc kubenswrapper[4796]: W1202 20:12:19.809176 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64bc04e9_c9fc_4a80_98de_59c88457ace6.slice/crio-dc50d42b74227a1545aa4f2b7f035b6b01abe2f709d5bc8b508b5cacda29b435 WatchSource:0}: Error finding container dc50d42b74227a1545aa4f2b7f035b6b01abe2f709d5bc8b508b5cacda29b435: Status 404 returned error can't find the container with id dc50d42b74227a1545aa4f2b7f035b6b01abe2f709d5bc8b508b5cacda29b435 Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.822230 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.841781 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.843165 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-mzw77"] Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.843858 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mzw77" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.844124 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-m672l"] Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.845289 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-m672l" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.846620 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.846862 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.847153 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.847408 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.848777 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423e1ef0-d2b1-442c-8751-372d2de26a00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 20:12:09.715158 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 20:12:09.723356 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1998965949/tls.crt::/tmp/serving-cert-1998965949/tls.key\\\\\\\"\\\\nI1202 20:12:15.056297 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 20:12:15.060468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 20:12:15.060506 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 20:12:15.060526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 20:12:15.060532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 20:12:15.077181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1202 20:12:15.077200 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 20:12:15.077201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077211 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077216 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 20:12:15.077220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 20:12:15.077223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 20:12:15.077226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 20:12:15.079033 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:19Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.853713 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.853996 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.854171 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.858089 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-wzhpq"] Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.858775 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.863236 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.863330 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.863241 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.863758 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.864049 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.880310 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9a3760c-5324-4e7c-ba6d-947d4200fd7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca809eaaaad32548ccb944ec8a40abe7dc6349d1e4b1640e53d5cd1aba9e8798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3931263dfbba517a3f109c2fec313fdb38b7232f4384fbf922d2de2f40d95840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://567f06ef173b505801a2b909e2ffd85973c697cbd3d89c10a5b118247c58c193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bce1cb4a264ff1c25ca77f3e4c9aa251e20002d987f4e1aa61fce2c99799f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:19Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.895324 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b634e2ec6fa9c531cec805314972afc064cf0790587894836a857a7c48e66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:19Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.900106 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.900143 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.900152 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.900166 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.900177 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:19Z","lastTransitionTime":"2025-12-02T20:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.918601 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.926747 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:19Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.943314 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64bd216ef667b40cff69e55295ff7ac4d61f8e893dc1f24f12f42aed5463417b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:19Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.956966 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:19Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.971363 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:19Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:19 crc kubenswrapper[4796]: I1202 20:12:19.988447 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:19Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.002566 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64bc04e9-c9fc-4a80-98de-59c88457ace6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpzx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:19Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.002825 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.002851 
4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.002859 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.002871 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.002881 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:20Z","lastTransitionTime":"2025-12-02T20:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.020347 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"}
,{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.026915 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/03fe6ac0-1095-4336-a25c-4dd0d6e45053-cni-binary-copy\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.026950 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/03fe6ac0-1095-4336-a25c-4dd0d6e45053-host-run-k8s-cni-cncf-io\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.026967 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/03fe6ac0-1095-4336-a25c-4dd0d6e45053-host-var-lib-cni-bin\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.026984 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb-cni-binary-copy\") pod \"multus-additional-cni-plugins-mzw77\" (UID: \"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\") " pod="openshift-multus/multus-additional-cni-plugins-mzw77" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.026999 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5558dc7c-93f9-4212-bf22-fdec743e47ee-mcd-auth-proxy-config\") pod \"machine-config-daemon-wzhpq\" (UID: \"5558dc7c-93f9-4212-bf22-fdec743e47ee\") " pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.027016 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kxk6\" (UniqueName: \"kubernetes.io/projected/9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb-kube-api-access-5kxk6\") pod \"multus-additional-cni-plugins-mzw77\" (UID: \"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\") " pod="openshift-multus/multus-additional-cni-plugins-mzw77" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.027031 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb-os-release\") pod \"multus-additional-cni-plugins-mzw77\" (UID: \"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\") " 
pod="openshift-multus/multus-additional-cni-plugins-mzw77" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.027047 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkp68\" (UniqueName: \"kubernetes.io/projected/5558dc7c-93f9-4212-bf22-fdec743e47ee-kube-api-access-dkp68\") pod \"machine-config-daemon-wzhpq\" (UID: \"5558dc7c-93f9-4212-bf22-fdec743e47ee\") " pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.027063 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03fe6ac0-1095-4336-a25c-4dd0d6e45053-etc-kubernetes\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.027083 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mzw77\" (UID: \"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\") " pod="openshift-multus/multus-additional-cni-plugins-mzw77" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.027099 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/03fe6ac0-1095-4336-a25c-4dd0d6e45053-multus-cni-dir\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.027123 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/03fe6ac0-1095-4336-a25c-4dd0d6e45053-cnibin\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.027171 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/03fe6ac0-1095-4336-a25c-4dd0d6e45053-multus-socket-dir-parent\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.027191 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/03fe6ac0-1095-4336-a25c-4dd0d6e45053-host-var-lib-kubelet\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.027206 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/03fe6ac0-1095-4336-a25c-4dd0d6e45053-host-run-netns\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.027275 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/03fe6ac0-1095-4336-a25c-4dd0d6e45053-multus-conf-dir\") pod \"multus-m672l\" (UID: 
\"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.027315 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/03fe6ac0-1095-4336-a25c-4dd0d6e45053-os-release\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.027347 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nnfj\" (UniqueName: \"kubernetes.io/projected/03fe6ac0-1095-4336-a25c-4dd0d6e45053-kube-api-access-8nnfj\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.027401 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mzw77\" (UID: \"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\") " pod="openshift-multus/multus-additional-cni-plugins-mzw77" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.027423 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/03fe6ac0-1095-4336-a25c-4dd0d6e45053-host-var-lib-cni-multus\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.027444 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/03fe6ac0-1095-4336-a25c-4dd0d6e45053-host-run-multus-certs\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.027491 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/03fe6ac0-1095-4336-a25c-4dd0d6e45053-hostroot\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.027523 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb-system-cni-dir\") pod \"multus-additional-cni-plugins-mzw77\" (UID: \"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\") " pod="openshift-multus/multus-additional-cni-plugins-mzw77" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.027560 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5558dc7c-93f9-4212-bf22-fdec743e47ee-rootfs\") pod \"machine-config-daemon-wzhpq\" (UID: \"5558dc7c-93f9-4212-bf22-fdec743e47ee\") " pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.027603 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/03fe6ac0-1095-4336-a25c-4dd0d6e45053-system-cni-dir\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.027627 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/03fe6ac0-1095-4336-a25c-4dd0d6e45053-multus-daemon-config\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.027651 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb-cnibin\") pod \"multus-additional-cni-plugins-mzw77\" (UID: \"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\") " pod="openshift-multus/multus-additional-cni-plugins-mzw77" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.027681 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5558dc7c-93f9-4212-bf22-fdec743e47ee-proxy-tls\") pod \"machine-config-daemon-wzhpq\" (UID: \"5558dc7c-93f9-4212-bf22-fdec743e47ee\") " pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.034215 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzw77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzw77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.048392 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"423e1ef0-d2b1-442c-8751-372d2de26a00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 20:12:09.715158 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 20:12:09.723356 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1998965949/tls.crt::/tmp/serving-cert-1998965949/tls.key\\\\\\\"\\\\nI1202 20:12:15.056297 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 20:12:15.060468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 20:12:15.060506 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 20:12:15.060526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 20:12:15.060532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 20:12:15.077181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1202 20:12:15.077200 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 20:12:15.077201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077211 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077216 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 20:12:15.077220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 20:12:15.077223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 20:12:15.077226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 20:12:15.079033 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.062332 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9a3760c-5324-4e7c-ba6d-947d4200fd7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca809eaaaad32548ccb944ec8a40abe7dc6349d1e4b1640e53d5cd1aba9e8798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3931263dfbba517a3f109c2fec313fdb38b7232f4384fbf922d2de2f40d95840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://567f06ef173b505801a2b909e2ffd85973c697cbd3d89c10a5b118247c58c193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bce1cb4a264ff1c25ca77f3e4c9aa251e20002d987f4e1aa61fce2c99799f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.075567 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.089614 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64bd216ef667b40cff69e55295ff7ac4d61f8e893dc1f24f12f42aed5463417b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.105726 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.106110 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.106120 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.106140 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.106152 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:20Z","lastTransitionTime":"2025-12-02T20:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.111199 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8cae09d-e96b-4d09-b52e-0914be7de1b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c89bf933f9cea43b887da33832efb415751c7e4b2c920514543b1e30a2a584c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fc05d7f51b4dd38d72107bd79f5c2e1ecfd31843e251bbeb816545d3e491a3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f59840a8a600786748b3a153bf8b1a784a55c3b43ffd837c5deaf1d14d15cbed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://417274ef4639b491835a4ae4d0df778270cdb94ee594d2ff8bfdd86b794ca488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd0e59c5c92c3ed21d1c83f3d929d9294fc86b7d7518bc1ed5091fc7c055b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.122490 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64bc04e9-c9fc-4a80-98de-59c88457ace6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpzx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.128147 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/03fe6ac0-1095-4336-a25c-4dd0d6e45053-host-run-netns\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.128180 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/03fe6ac0-1095-4336-a25c-4dd0d6e45053-multus-conf-dir\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.128199 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/03fe6ac0-1095-4336-a25c-4dd0d6e45053-os-release\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.128217 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nnfj\" (UniqueName: \"kubernetes.io/projected/03fe6ac0-1095-4336-a25c-4dd0d6e45053-kube-api-access-8nnfj\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.128243 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/03fe6ac0-1095-4336-a25c-4dd0d6e45053-host-run-multus-certs\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 
20:12:20.128274 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/03fe6ac0-1095-4336-a25c-4dd0d6e45053-host-run-netns\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.128330 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/03fe6ac0-1095-4336-a25c-4dd0d6e45053-multus-conf-dir\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.128360 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mzw77\" (UID: \"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\") " pod="openshift-multus/multus-additional-cni-plugins-mzw77" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.128438 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/03fe6ac0-1095-4336-a25c-4dd0d6e45053-host-var-lib-cni-multus\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.128455 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/03fe6ac0-1095-4336-a25c-4dd0d6e45053-host-run-multus-certs\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.128462 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/03fe6ac0-1095-4336-a25c-4dd0d6e45053-os-release\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.128467 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/03fe6ac0-1095-4336-a25c-4dd0d6e45053-hostroot\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.128485 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/03fe6ac0-1095-4336-a25c-4dd0d6e45053-hostroot\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.128496 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/03fe6ac0-1095-4336-a25c-4dd0d6e45053-host-var-lib-cni-multus\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.128567 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb-system-cni-dir\") pod 
\"multus-additional-cni-plugins-mzw77\" (UID: \"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\") " pod="openshift-multus/multus-additional-cni-plugins-mzw77" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.128625 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5558dc7c-93f9-4212-bf22-fdec743e47ee-rootfs\") pod \"machine-config-daemon-wzhpq\" (UID: \"5558dc7c-93f9-4212-bf22-fdec743e47ee\") " pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.128644 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5558dc7c-93f9-4212-bf22-fdec743e47ee-proxy-tls\") pod \"machine-config-daemon-wzhpq\" (UID: \"5558dc7c-93f9-4212-bf22-fdec743e47ee\") " pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.128661 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/03fe6ac0-1095-4336-a25c-4dd0d6e45053-system-cni-dir\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.128591 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb-system-cni-dir\") pod \"multus-additional-cni-plugins-mzw77\" (UID: \"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\") " pod="openshift-multus/multus-additional-cni-plugins-mzw77" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.128682 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/03fe6ac0-1095-4336-a25c-4dd0d6e45053-multus-daemon-config\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.128843 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/03fe6ac0-1095-4336-a25c-4dd0d6e45053-system-cni-dir\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.128866 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb-cnibin\") pod \"multus-additional-cni-plugins-mzw77\" (UID: \"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\") " pod="openshift-multus/multus-additional-cni-plugins-mzw77" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.128908 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5558dc7c-93f9-4212-bf22-fdec743e47ee-mcd-auth-proxy-config\") pod \"machine-config-daemon-wzhpq\" (UID: \"5558dc7c-93f9-4212-bf22-fdec743e47ee\") " pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.128923 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb-cnibin\") pod \"multus-additional-cni-plugins-mzw77\" (UID: 
\"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\") " pod="openshift-multus/multus-additional-cni-plugins-mzw77" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.128940 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/03fe6ac0-1095-4336-a25c-4dd0d6e45053-cni-binary-copy\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.128966 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/03fe6ac0-1095-4336-a25c-4dd0d6e45053-host-run-k8s-cni-cncf-io\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.128997 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/03fe6ac0-1095-4336-a25c-4dd0d6e45053-host-var-lib-cni-bin\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.129029 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb-cni-binary-copy\") pod \"multus-additional-cni-plugins-mzw77\" (UID: \"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\") " pod="openshift-multus/multus-additional-cni-plugins-mzw77" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.129054 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kxk6\" (UniqueName: \"kubernetes.io/projected/9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb-kube-api-access-5kxk6\") pod \"multus-additional-cni-plugins-mzw77\" (UID: \"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\") " pod="openshift-multus/multus-additional-cni-plugins-mzw77" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.129085 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03fe6ac0-1095-4336-a25c-4dd0d6e45053-etc-kubernetes\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.129108 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb-os-release\") pod \"multus-additional-cni-plugins-mzw77\" (UID: \"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\") " pod="openshift-multus/multus-additional-cni-plugins-mzw77" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.129129 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkp68\" (UniqueName: \"kubernetes.io/projected/5558dc7c-93f9-4212-bf22-fdec743e47ee-kube-api-access-dkp68\") pod \"machine-config-daemon-wzhpq\" (UID: \"5558dc7c-93f9-4212-bf22-fdec743e47ee\") " pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.129154 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb-tuning-conf-dir\") pod 
\"multus-additional-cni-plugins-mzw77\" (UID: \"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\") " pod="openshift-multus/multus-additional-cni-plugins-mzw77" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.129204 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/03fe6ac0-1095-4336-a25c-4dd0d6e45053-multus-cni-dir\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.129277 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/03fe6ac0-1095-4336-a25c-4dd0d6e45053-cnibin\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.129311 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/03fe6ac0-1095-4336-a25c-4dd0d6e45053-multus-socket-dir-parent\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.129335 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/03fe6ac0-1095-4336-a25c-4dd0d6e45053-host-var-lib-kubelet\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.129366 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/03fe6ac0-1095-4336-a25c-4dd0d6e45053-host-var-lib-cni-bin\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.129401 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/03fe6ac0-1095-4336-a25c-4dd0d6e45053-multus-daemon-config\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.129575 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/03fe6ac0-1095-4336-a25c-4dd0d6e45053-cni-binary-copy\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.129531 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5558dc7c-93f9-4212-bf22-fdec743e47ee-mcd-auth-proxy-config\") pod \"machine-config-daemon-wzhpq\" (UID: \"5558dc7c-93f9-4212-bf22-fdec743e47ee\") " pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.129658 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/03fe6ac0-1095-4336-a25c-4dd0d6e45053-cnibin\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.129642 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/03fe6ac0-1095-4336-a25c-4dd0d6e45053-host-var-lib-kubelet\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.129677 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb-os-release\") pod \"multus-additional-cni-plugins-mzw77\" (UID: \"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\") " pod="openshift-multus/multus-additional-cni-plugins-mzw77" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.129683 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03fe6ac0-1095-4336-a25c-4dd0d6e45053-etc-kubernetes\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.128669 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5558dc7c-93f9-4212-bf22-fdec743e47ee-rootfs\") pod \"machine-config-daemon-wzhpq\" (UID: \"5558dc7c-93f9-4212-bf22-fdec743e47ee\") " pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.129713 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/03fe6ac0-1095-4336-a25c-4dd0d6e45053-multus-cni-dir\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.129730 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/03fe6ac0-1095-4336-a25c-4dd0d6e45053-multus-socket-dir-parent\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.129825 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/03fe6ac0-1095-4336-a25c-4dd0d6e45053-host-run-k8s-cni-cncf-io\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.129844 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mzw77\" (UID: \"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\") " pod="openshift-multus/multus-additional-cni-plugins-mzw77" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.130009 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mzw77\" (UID: \"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\") " pod="openshift-multus/multus-additional-cni-plugins-mzw77" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.130149 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb-cni-binary-copy\") pod \"multus-additional-cni-plugins-mzw77\" (UID: \"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\") " pod="openshift-multus/multus-additional-cni-plugins-mzw77" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.133512 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5558dc7c-93f9-4212-bf22-fdec743e47ee-proxy-tls\") pod \"machine-config-daemon-wzhpq\" (UID: \"5558dc7c-93f9-4212-bf22-fdec743e47ee\") " pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.137129 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5558dc7c-93f9-4212-bf22-fdec743e47ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wzhpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.148183 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nnfj\" (UniqueName: \"kubernetes.io/projected/03fe6ac0-1095-4336-a25c-4dd0d6e45053-kube-api-access-8nnfj\") pod \"multus-m672l\" (UID: \"03fe6ac0-1095-4336-a25c-4dd0d6e45053\") " pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.149366 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kxk6\" (UniqueName: \"kubernetes.io/projected/9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb-kube-api-access-5kxk6\") pod \"multus-additional-cni-plugins-mzw77\" (UID: \"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\") " pod="openshift-multus/multus-additional-cni-plugins-mzw77" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.150246 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkp68\" (UniqueName: \"kubernetes.io/projected/5558dc7c-93f9-4212-bf22-fdec743e47ee-kube-api-access-dkp68\") pod \"machine-config-daemon-wzhpq\" (UID: \"5558dc7c-93f9-4212-bf22-fdec743e47ee\") " pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.156846 4796 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b634e2ec6fa9c531cec805314972afc064cf0790587894836a857a7c48e66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.161478 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mzw77" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.171438 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-m672l" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.178306 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.183433 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.213712 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.213749 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.213810 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.213830 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.213841 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:20Z","lastTransitionTime":"2025-12-02T20:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.224963 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.234381 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-b286j"] Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.235621 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.242055 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.251046 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.251123 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.251168 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.251047 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.251423 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.251475 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.279457 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m672l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03fe6ac0-1095-4336-a25c-4dd0d6e45053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nnfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m672l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.293813 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.303693 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m672l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03fe6ac0-1095-4336-a25c-4dd0d6e45053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nnfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m672l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.316278 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.316319 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.316329 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.316344 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.316353 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:20Z","lastTransitionTime":"2025-12-02T20:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.320304 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a81d4f-9cb5-40b1-93cf-5691b915a68e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b286j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.331460 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-host-cni-bin\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" 
Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.331560 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/87a81d4f-9cb5-40b1-93cf-5691b915a68e-ovnkube-config\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.331591 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-host-run-ovn-kubernetes\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.331610 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.331625 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/87a81d4f-9cb5-40b1-93cf-5691b915a68e-ovn-node-metrics-cert\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.331643 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-run-openvswitch\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.331657 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-log-socket\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.331670 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-host-cni-netd\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.331688 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-etc-openvswitch\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.331704 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-run-ovn\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.331807 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-host-kubelet\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.331843 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/87a81d4f-9cb5-40b1-93cf-5691b915a68e-ovnkube-script-lib\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.331930 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-host-slash\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.331963 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-run-systemd\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.331980 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-var-lib-openvswitch\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.332003 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjjqc\" (UniqueName: \"kubernetes.io/projected/87a81d4f-9cb5-40b1-93cf-5691b915a68e-kube-api-access-cjjqc\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.332042 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-host-run-netns\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.332158 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/87a81d4f-9cb5-40b1-93cf-5691b915a68e-env-overrides\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.332194 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-systemd-units\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.332211 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-node-log\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.333125 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b634e2ec6fa9c531cec805314972afc064cf0790587894836a857a7c48e66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.347548 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.360636 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.374771 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzw77" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzw77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.382439 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mzw77" event={"ID":"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb","Type":"ContainerStarted","Data":"984ea672c013695c42a445b88dc1a22779d53817ab301e93d9fe83776e6448b4"} Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.384984 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-mpjq8" event={"ID":"64bc04e9-c9fc-4a80-98de-59c88457ace6","Type":"ContainerStarted","Data":"9693d017132303f9c7998ef2e45ebe24e58350336b436a84e0a3d1260b8d42d3"} Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.385032 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-mpjq8" event={"ID":"64bc04e9-c9fc-4a80-98de-59c88457ace6","Type":"ContainerStarted","Data":"dc50d42b74227a1545aa4f2b7f035b6b01abe2f709d5bc8b508b5cacda29b435"} Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.387788 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.388821 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" event={"ID":"5558dc7c-93f9-4212-bf22-fdec743e47ee","Type":"ContainerStarted","Data":"0ff692b0e7bcf0f9571138c9632ae4c22dddcccc147c6b0e7994e7ff6e13aa08"} Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.388850 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" event={"ID":"5558dc7c-93f9-4212-bf22-fdec743e47ee","Type":"ContainerStarted","Data":"8357f59890d486b0422d09a96ff94b9b2b7f8efd9f2fdca0025779f0a5037a3d"} Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.391589 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-m672l" event={"ID":"03fe6ac0-1095-4336-a25c-4dd0d6e45053","Type":"ContainerStarted","Data":"31287b814c58a714ee9e12931a432d350919edbaae9abdfda56aa0379f08961a"} Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.391645 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-m672l" event={"ID":"03fe6ac0-1095-4336-a25c-4dd0d6e45053","Type":"ContainerStarted","Data":"bba1294397eebecd568254ab17df909357967726e338327e7d16160c8b3d1553"} Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.400905 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64bd216ef667b40cff69e55295ff7ac4d61f8e893dc1f24f12f42aed5463417b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.418783 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.418814 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.418823 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.418844 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.418858 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:20Z","lastTransitionTime":"2025-12-02T20:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.420118 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423e1ef0-d2b1-442c-8751-372d2de26a00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 20:12:09.715158 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 20:12:09.723356 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1998965949/tls.crt::/tmp/serving-cert-1998965949/tls.key\\\\\\\"\\\\nI1202 20:12:15.056297 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 20:12:15.060468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 20:12:15.060506 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 20:12:15.060526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 20:12:15.060532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 20:12:15.077181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1202 20:12:15.077200 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 20:12:15.077201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077211 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077216 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 20:12:15.077220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 20:12:15.077223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 20:12:15.077226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 20:12:15.079033 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.432905 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9a3760c-5324-4e7c-ba6d-947d4200fd7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca809eaaaad32548ccb944ec8a40abe7dc6349d1e4b1640e53d5cd1aba9e8798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3931263dfbba517a3f109c2fec313fdb38b7232f4384fbf922d2de2f40d95840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://567f06ef173b505801a2b909e2ffd85973c697cbd3d89c10a5b118247c58c193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bce1cb4a264ff1c25ca77f3e4c9aa251e20002d987f4e1aa61fce2c99799f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.433163 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/87a81d4f-9cb5-40b1-93cf-5691b915a68e-env-overrides\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.433209 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-systemd-units\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.433233 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-node-log\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.433297 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-host-cni-bin\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.433332 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/87a81d4f-9cb5-40b1-93cf-5691b915a68e-ovnkube-config\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.433352 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-node-log\") pod \"ovnkube-node-b286j\" (UID: 
\"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.433369 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-host-run-ovn-kubernetes\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.433374 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-systemd-units\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.433402 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.433427 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/87a81d4f-9cb5-40b1-93cf-5691b915a68e-ovn-node-metrics-cert\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.433447 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-run-openvswitch\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.433454 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-host-cni-bin\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.433465 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-log-socket\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.433481 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-host-cni-netd\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.433494 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-b286j\" (UID: 
\"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.433499 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-etc-openvswitch\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.433521 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-etc-openvswitch\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.433554 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-run-ovn\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.433596 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-host-kubelet\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.433621 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/87a81d4f-9cb5-40b1-93cf-5691b915a68e-ovnkube-script-lib\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.433651 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-host-slash\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.433671 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-run-systemd\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.433689 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-var-lib-openvswitch\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.433708 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjjqc\" (UniqueName: \"kubernetes.io/projected/87a81d4f-9cb5-40b1-93cf-5691b915a68e-kube-api-access-cjjqc\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 
20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.433731 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-host-run-netns\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.433793 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-host-run-netns\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.433818 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-run-ovn\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.433838 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-host-kubelet\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.433889 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/87a81d4f-9cb5-40b1-93cf-5691b915a68e-env-overrides\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.433958 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-run-systemd\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.433993 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-host-slash\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.433995 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-run-openvswitch\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.434022 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-var-lib-openvswitch\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.434045 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-log-socket\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.433409 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-host-run-ovn-kubernetes\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.434070 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/87a81d4f-9cb5-40b1-93cf-5691b915a68e-ovnkube-config\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.434072 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-host-cni-netd\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.434440 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/87a81d4f-9cb5-40b1-93cf-5691b915a68e-ovnkube-script-lib\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.437221 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/87a81d4f-9cb5-40b1-93cf-5691b915a68e-ovn-node-metrics-cert\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.448508 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjjqc\" (UniqueName: \"kubernetes.io/projected/87a81d4f-9cb5-40b1-93cf-5691b915a68e-kube-api-access-cjjqc\") pod \"ovnkube-node-b286j\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.450878 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8cae09d-e96b-4d09-b52e-0914be7de1b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c89bf933f9cea43b887da33832efb415751c7e4b2c920514543b1e30a2a584c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fc05d7f51b4dd38d72107bd79f5c2e1ecfd31843e251bbeb816545d3e491a3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f59840a8a600786748b3a153bf8b1a784a55c3b43ffd837c5deaf1d14d15cbed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://417274ef4639b491835a4ae4d0df778270cdb94
ee594d2ff8bfdd86b794ca488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd0e59c5c92c3ed21d1c83f3d929d9294fc86b7d7518bc1ed5091fc7c055b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.460375 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64bc04e9-c9fc-4a80-98de-59c88457ace6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpzx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.470881 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5558dc7c-93f9-4212-bf22-fdec743e47ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wzhpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.484795 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzw77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzw77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.496178 4796 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.508530 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9a3760c-5324-4e7c-ba6d-947d4200fd7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca809eaaaad32548ccb944ec8a40abe7dc6349d1e4b1640e53d5cd1aba9e8798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3931263dfbba517a3f109c2fec313fdb38b7232f4384fbf922d2de2f40d95840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://567f06ef173b505801a2b909e2ffd85973c697cbd3d89c10a5b118247c58c193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bce1cb4a264ff1c25ca77f3e4c9aa251e20002d987f4e1aa61fce2c99799f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.520520 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.521610 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.521650 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.521660 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.521678 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.521691 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:20Z","lastTransitionTime":"2025-12-02T20:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.533018 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64bd216ef667b40cff69e55295ff7ac4d61f8e893dc1f24f12f42aed5463417b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.546609 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423e1ef0-d2b1-442c-8751-372d2de26a00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 20:12:09.715158 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 20:12:09.723356 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1998965949/tls.crt::/tmp/serving-cert-1998965949/tls.key\\\\\\\"\\\\nI1202 20:12:15.056297 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 20:12:15.060468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 20:12:15.060506 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 20:12:15.060526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 20:12:15.060532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 20:12:15.077181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1202 20:12:15.077200 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 20:12:15.077201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077211 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077216 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 20:12:15.077220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 20:12:15.077223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 20:12:15.077226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 20:12:15.079033 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.556619 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjq8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64bc04e9-c9fc-4a80-98de-59c88457ace6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9693d017132303f9c7998ef2e45ebe24e58350336b436a84e0a3d1260b8d42d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpzx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.557746 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.567860 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5558dc7c-93f9-4212-bf22-fdec743e47ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wzhpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:20Z is after 2025-08-24T17:21:41Z" Dec 
02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.587850 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8cae09d-e96b-4d09-b52e-0914be7de1b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c89bf933f9cea43b887da33832efb415751c7e4b2c920514543b1e30a2a584c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fc05d7f51b4dd38d72107bd79f5c2e1ecfd31843e251bbeb816545d3e491a3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f59840a8a600786748b3a153bf8b1a784a55c3b43ffd837c5deaf1d14d15cbed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\
"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://417274ef4639b491835a4ae4d0df778270cdb94ee594d2ff8bfdd86b794ca488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd0e59c5c92c3ed21d1c83f3d929d9294fc86b7d7518bc1ed5091fc7c055b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.601097 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b634e2ec6fa9c531cec805314972afc064cf0790587894836a857a7c48e66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.615066 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.623949 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.623973 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.623983 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.623997 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.624008 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:20Z","lastTransitionTime":"2025-12-02T20:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.628559 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.640751 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m672l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03fe6ac0-1095-4336-a25c-4dd0d6e45053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31287b814c58a714ee9e12931a432d350919edbaae9abdfda56aa0379f08961a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nnfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m672l\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.658141 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a81d4f-9cb5-40b1-93cf-5691b915a68e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b286j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.726713 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.727092 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 
20:12:20.727102 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.727116 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.727126 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:20Z","lastTransitionTime":"2025-12-02T20:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.829738 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.829772 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.829783 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.829798 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.829809 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:20Z","lastTransitionTime":"2025-12-02T20:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.932204 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.932444 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.932543 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.932641 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:20 crc kubenswrapper[4796]: I1202 20:12:20.932728 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:20Z","lastTransitionTime":"2025-12-02T20:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.035508 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.035531 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.035539 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.035551 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.035561 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:21Z","lastTransitionTime":"2025-12-02T20:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.137904 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.138542 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.138612 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.138683 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.138754 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:21Z","lastTransitionTime":"2025-12-02T20:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.242146 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.242188 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.242197 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.242213 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.242223 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:21Z","lastTransitionTime":"2025-12-02T20:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.265086 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.265136 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:12:21 crc kubenswrapper[4796]: E1202 20:12:21.265216 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.265299 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:12:21 crc kubenswrapper[4796]: E1202 20:12:21.265398 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:12:21 crc kubenswrapper[4796]: E1202 20:12:21.265697 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.344543 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.344575 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.344585 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.344602 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.344612 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:21Z","lastTransitionTime":"2025-12-02T20:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.398430 4796 generic.go:334] "Generic (PLEG): container finished" podID="9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb" containerID="18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74" exitCode=0 Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.398487 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mzw77" event={"ID":"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb","Type":"ContainerDied","Data":"18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74"} Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.400047 4796 generic.go:334] "Generic (PLEG): container finished" podID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerID="7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df" exitCode=0 Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.400075 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" event={"ID":"87a81d4f-9cb5-40b1-93cf-5691b915a68e","Type":"ContainerDied","Data":"7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df"} Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.400116 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" event={"ID":"87a81d4f-9cb5-40b1-93cf-5691b915a68e","Type":"ContainerStarted","Data":"48cbeb53b893e44b4f08fdd88eece139f2db9f5d740890660a7abc45181d84a0"} Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.402137 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" event={"ID":"5558dc7c-93f9-4212-bf22-fdec743e47ee","Type":"ContainerStarted","Data":"9a71fbcbe900da771fbfbdbe666a0770e052c398e317c196741da6868bac278c"} Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.415300 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423e1ef0-d2b1-442c-8751-372d2de26a00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 20:12:09.715158 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 20:12:09.723356 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1998965949/tls.crt::/tmp/serving-cert-1998965949/tls.key\\\\\\\"\\\\nI1202 20:12:15.056297 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 20:12:15.060468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 20:12:15.060506 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 20:12:15.060526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 20:12:15.060532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 20:12:15.077181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1202 20:12:15.077200 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 20:12:15.077201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077211 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077216 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 20:12:15.077220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 20:12:15.077223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 20:12:15.077226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 20:12:15.079033 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:21Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.428139 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9a3760c-5324-4e7c-ba6d-947d4200fd7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca809eaaaad32548ccb944ec8a40abe7dc6349d1e4b1640e53d5cd1aba9e8798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3931263dfbba517a3f109c2fec313fdb38b7232f4384fbf922d2de2f40d95840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://567f06ef173b505801a2b909e2ffd85973c697cbd3d89c10a5b118247c58c193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bce1cb4a264ff1c25ca77f3e4c9aa251e20002d987f4e1aa61fce2c99799f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:21Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.441790 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:21Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.447578 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.447665 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.447700 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.447718 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.447734 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:21Z","lastTransitionTime":"2025-12-02T20:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.455700 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64bd216ef667b40cff69e55295ff7ac4d61f8e893dc1f24f12f42aed5463417b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:21Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.472580 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8cae09d-e96b-4d09-b52e-0914be7de1b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c89bf933f9cea43b887da33832efb415751c7e4b2c920514543b1e30a2a584c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fc05d7f51b4dd38d72107bd79f5c2e1ecfd31843e251bbeb816545d3e491a3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f59840a8a600786748b3a153bf8b1a784a55c3b43ffd837c5deaf1d14d15cbed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://417274ef4639b491835a4ae4d0df778270cdb94
ee594d2ff8bfdd86b794ca488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd0e59c5c92c3ed21d1c83f3d929d9294fc86b7d7518bc1ed5091fc7c055b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:21Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.482559 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64bc04e9-c9fc-4a80-98de-59c88457ace6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9693d017132303f9c7998ef2e45ebe24e58350336b436a84e0a3d1260b8d42d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpzx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:21Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.492538 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5558dc7c-93f9-4212-bf22-fdec743e47ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wzhpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:21Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.505571 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b634e2ec6fa9c531cec805314972afc064cf0790587894836a857a7c48e66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:21Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.519847 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:21Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.533013 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:21Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.544821 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m672l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03fe6ac0-1095-4336-a25c-4dd0d6e45053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31287b814c58a714ee9e12931a432d350919edbaae9abdfda56aa0379f08961a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nnfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m672l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:21Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.550056 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.550079 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.550116 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.550131 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.550139 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:21Z","lastTransitionTime":"2025-12-02T20:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.562377 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a81d4f-9cb5-40b1-93cf-5691b915a68e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b286j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:21Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.575514 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:21Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.588995 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzw77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzw77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:21Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:21 crc 
kubenswrapper[4796]: I1202 20:12:21.601806 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b634e2ec6fa9c531cec805314972afc064cf0790587894836a857a7c48e66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:21Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.617363 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:21Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.634122 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:21Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.646175 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m672l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03fe6ac0-1095-4336-a25c-4dd0d6e45053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31287b814c58a714ee9e12931a432d350919edbaae9abdfda56aa0379f08961a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nnfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m672l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:21Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.653060 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.653098 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.653111 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.653127 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.653138 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:21Z","lastTransitionTime":"2025-12-02T20:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.664982 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a81d4f-9cb5-40b1-93cf-5691b915a68e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a25
47028e195ad7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b286j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:21Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.681192 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzw77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzw77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:21Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:21 crc 
kubenswrapper[4796]: I1202 20:12:21.692745 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:21Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.703530 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9a3760c-5324-4e7c-ba6d-947d4200fd7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca809eaaaad32548ccb944ec8a40abe7dc6349d1e4b1640e53d5cd1aba9e8798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3931263dfbba517a3f109c2fec313fdb38b7232f4384fbf922d2de2f40d95840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://567f06ef173b505801a2b909e2ffd85973c697cbd3d89c10a5b118247c58c193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bce1cb4a264ff1c25ca77f3e4c9aa251e20002d987f4e1aa61fce2c99799f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:21Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.715764 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:21Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.725580 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64bd216ef667b40cff69e55295ff7ac4d61f8e893dc1f24f12f42aed5463417b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:21Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.741474 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"423e1ef0-d2b1-442c-8751-372d2de26a00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 20:12:09.715158 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 20:12:09.723356 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1998965949/tls.crt::/tmp/serving-cert-1998965949/tls.key\\\\\\\"\\\\nI1202 20:12:15.056297 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 20:12:15.060468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 20:12:15.060506 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 20:12:15.060526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 20:12:15.060532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 20:12:15.077181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1202 20:12:15.077200 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 20:12:15.077201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077211 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077216 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 20:12:15.077220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 20:12:15.077223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 20:12:15.077226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 20:12:15.079033 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:21Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.753469 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjq8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64bc04e9-c9fc-4a80-98de-59c88457ace6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9693d017132303f9c7998ef2e45ebe24e58350336b436a84e0a3d1260b8d42d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpzx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:21Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.754838 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.754901 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.754910 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.754925 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.754934 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:21Z","lastTransitionTime":"2025-12-02T20:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.770505 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5558dc7c-93f9-4212-bf22-fdec743e47ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a71fbcbe900da771fbfbdbe666a0770e052c398e317c196741da6868bac278c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff692b0e7bcf0f9571138c9632ae4c22dddcccc147c6b0e7994e7ff6e13aa08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wzhpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:21Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.798106 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8cae09d-e96b-4d09-b52e-0914be7de1b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c89bf933f9cea43b887da33832efb415751c7e4b2c920514543b1e30a2a584c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fc05d7f51b4dd38d72107bd79f5c2e1ecfd31843e251bbeb816545d3e491a3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f59840a8a600786748b3a153bf8b1a784a55c3b43ffd837c5deaf1d14d15cbed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://417274ef4639b491835a4ae4d0df778270cdb94ee594d2ff8bfdd86b794ca488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd0e59c5c92c3ed21d1c83f3d929d9294fc86b7d7518bc1ed5091fc7c055b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:21Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.857468 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.857504 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.857513 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.857527 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.857544 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:21Z","lastTransitionTime":"2025-12-02T20:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.959617 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.959660 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.959674 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.959692 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:21 crc kubenswrapper[4796]: I1202 20:12:21.959705 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:21Z","lastTransitionTime":"2025-12-02T20:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.061716 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.061755 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.061777 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.061792 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.061804 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:22Z","lastTransitionTime":"2025-12-02T20:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.164057 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.164089 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.164099 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.164115 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.164126 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:22Z","lastTransitionTime":"2025-12-02T20:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.266575 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.266622 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.266632 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.266647 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.266658 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:22Z","lastTransitionTime":"2025-12-02T20:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.369336 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.369375 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.369386 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.369402 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.369414 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:22Z","lastTransitionTime":"2025-12-02T20:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.407607 4796 generic.go:334] "Generic (PLEG): container finished" podID="9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb" containerID="ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0" exitCode=0 Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.407648 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mzw77" event={"ID":"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb","Type":"ContainerDied","Data":"ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0"} Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.414738 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" event={"ID":"87a81d4f-9cb5-40b1-93cf-5691b915a68e","Type":"ContainerStarted","Data":"045bb912902864e6abcf1d71fd0e3f8dc941197ea1e572d2e2e2011b536ba7bd"} Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.414778 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" event={"ID":"87a81d4f-9cb5-40b1-93cf-5691b915a68e","Type":"ContainerStarted","Data":"0be133bdee26d23a224b8da570240c1862aa542c005d8017b2d1817c7c286769"} Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.414788 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" event={"ID":"87a81d4f-9cb5-40b1-93cf-5691b915a68e","Type":"ContainerStarted","Data":"fb637106bbcec18e199338d1deb9718195ec58a98bd3926955d08e7f77401544"} Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.414799 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" event={"ID":"87a81d4f-9cb5-40b1-93cf-5691b915a68e","Type":"ContainerStarted","Data":"655f6d42d97749b669cfa44687f71146615027902d52926d59558a788be4553a"} Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.414808 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" event={"ID":"87a81d4f-9cb5-40b1-93cf-5691b915a68e","Type":"ContainerStarted","Data":"b4b39195ca74c45e0c948605f1741eef4d655ae6ee34c0fdf6fb1159336e99c3"} Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.414817 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" event={"ID":"87a81d4f-9cb5-40b1-93cf-5691b915a68e","Type":"ContainerStarted","Data":"04e244b0768662bff1ef8a3d0ad1db2df8f9470ec043bbe007542e37d69a3586"} Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.423795 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b634e2ec6fa9c531cec805314972afc064cf0790587894836a857a7c48e66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:22Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.438503 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:22Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.450500 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:22Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.480032 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.480091 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.480112 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.480142 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.480174 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:22Z","lastTransitionTime":"2025-12-02T20:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.481702 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m672l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03fe6ac0-1095-4336-a25c-4dd0d6e45053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31287b814c58a714ee9e12931a432d350919edbaae9abdfda56aa0379f08961a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nnfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m672l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:22Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.519710 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a81d4f-9cb5-40b1-93cf-5691b915a68e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b286j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:22Z 
is after 2025-08-24T17:21:41Z" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.543986 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:22Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.568074 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzw77" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzw77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:22Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.581112 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423e1ef0-d2b1-442c-8751-372d2de26a00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 20:12:09.715158 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 20:12:09.723356 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1998965949/tls.crt::/tmp/serving-cert-1998965949/tls.key\\\\\\\"\\\\nI1202 20:12:15.056297 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 20:12:15.060468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 20:12:15.060506 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 20:12:15.060526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 20:12:15.060532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 20:12:15.077181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1202 20:12:15.077200 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 20:12:15.077201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077211 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077216 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 20:12:15.077220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 20:12:15.077223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 20:12:15.077226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 20:12:15.079033 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:22Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.583567 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.583608 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.583618 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.583634 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.583644 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:22Z","lastTransitionTime":"2025-12-02T20:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.595162 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9a3760c-5324-4e7c-ba6d-947d4200fd7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca809eaaaad32548ccb944ec8a40abe7dc6349d1e4b1640e53d5cd1aba9e8798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3931263dfbba517a3f109c2fec313fdb38b7232f4384fbf922d2de2f40d95840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://567f06ef173b505801a2b909e2ffd85973c697cbd3d89c10a5b118247c58c193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bce1cb4a264ff1c25ca77f3e4c9aa251e20002d987f4e1aa61fce2c99799f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:22Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.607355 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:22Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.617298 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64bd216ef667b40cff69e55295ff7ac4d61f8e893dc1f24f12f42aed5463417b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:22Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.636095 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8cae09d-e96b-4d09-b52e-0914be7de1b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c89bf933f9cea43b887da33832efb415751c7e4b2c920514543b1e30a2a584c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fc05d7f51b4dd38d72107bd79f5c2e1ecfd31843e251bbeb816545d3e491a3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f59840a8a600786748b3a153bf8b1a784a55c3b43ffd837c5deaf1d14d15cbed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://417274ef4639b491835a4ae4d0df778270cdb94
ee594d2ff8bfdd86b794ca488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd0e59c5c92c3ed21d1c83f3d929d9294fc86b7d7518bc1ed5091fc7c055b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:22Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.647535 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64bc04e9-c9fc-4a80-98de-59c88457ace6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9693d017132303f9c7998ef2e45ebe24e58350336b436a84e0a3d1260b8d42d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpzx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:22Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.657698 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5558dc7c-93f9-4212-bf22-fdec743e47ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a71fbcbe900da771fbfbdbe666a0770e052c398e317c196741da6868bac278c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff692b0e7bcf0f9571138c9632ae4c22dddcccc147c6b0e7994e7ff6e13aa08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp
68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wzhpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:22Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.685656 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.685690 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.685699 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.685716 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.685726 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:22Z","lastTransitionTime":"2025-12-02T20:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.788535 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.788579 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.788589 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.788610 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.788621 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:22Z","lastTransitionTime":"2025-12-02T20:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.847647 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-8p72p"] Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.848053 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-8p72p" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.850190 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.851513 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.852167 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.853620 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.865468 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/sec
rets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:22Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.877054 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzw77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-
02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzw77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:22Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.889934 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423e1ef0-d2b1-442c-8751-372d2de26a00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift
-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 20:12:09.715158 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 20:12:09.723356 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1998965949/tls.crt::/tmp/serving-cert-1998965949/tls.key\\\\\\\"\\\\nI1202 20:12:15.056297 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 20:12:15.060468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 20:12:15.060506 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 20:12:15.060526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 20:12:15.060532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 20:12:15.077181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1202 20:12:15.077200 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 20:12:15.077201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077211 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077216 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 20:12:15.077220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 20:12:15.077223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 20:12:15.077226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 20:12:15.079033 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:22Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.891356 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.891400 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.891413 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.891430 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.891443 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:22Z","lastTransitionTime":"2025-12-02T20:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.902058 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9a3760c-5324-4e7c-ba6d-947d4200fd7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca809eaaaad32548ccb944ec8a40abe7dc6349d1e4b1640e53d5cd1aba9e8798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3931263dfbba517a3f109c2fec313fdb38b7232f4384fbf922d2de2f40d95840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://567f06ef173b505801a2b909e2ffd85973c697cbd3d89c10a5b118247c58c193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bce1cb4a264ff1c25ca77f3e4c9aa251e20002d987f4e1aa61fce2c99799f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:22Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.912347 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:22Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.923621 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64bd216ef667b40cff69e55295ff7ac4d61f8e893dc1f24f12f42aed5463417b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:22Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.939854 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8cae09d-e96b-4d09-b52e-0914be7de1b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c89bf933f9cea43b887da33832efb415751c7e4b2c920514543b1e30a2a584c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fc05d7f51b4dd38d72107bd79f5c2e1ecfd31843e251bbeb816545d3e491a3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f59840a8a600786748b3a153bf8b1a784a55c3b43ffd837c5deaf1d14d15cbed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://417274ef4639b491835a4ae4d0df778270cdb94
ee594d2ff8bfdd86b794ca488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd0e59c5c92c3ed21d1c83f3d929d9294fc86b7d7518bc1ed5091fc7c055b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:22Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.949770 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64bc04e9-c9fc-4a80-98de-59c88457ace6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9693d017132303f9c7998ef2e45ebe24e58350336b436a84e0a3d1260b8d42d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpzx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:22Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.955692 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.955821 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.955858 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47szf\" (UniqueName: \"kubernetes.io/projected/a7cf3531-4bca-4b5d-9fa6-e70775605e3a-kube-api-access-47szf\") pod \"node-ca-8p72p\" (UID: \"a7cf3531-4bca-4b5d-9fa6-e70775605e3a\") " pod="openshift-image-registry/node-ca-8p72p" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.955880 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a7cf3531-4bca-4b5d-9fa6-e70775605e3a-serviceca\") pod \"node-ca-8p72p\" (UID: \"a7cf3531-4bca-4b5d-9fa6-e70775605e3a\") " pod="openshift-image-registry/node-ca-8p72p" Dec 02 20:12:22 crc kubenswrapper[4796]: E1202 20:12:22.955907 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:12:30.955876209 +0000 UTC m=+33.959251743 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.955959 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.956019 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a7cf3531-4bca-4b5d-9fa6-e70775605e3a-host\") pod \"node-ca-8p72p\" (UID: \"a7cf3531-4bca-4b5d-9fa6-e70775605e3a\") " pod="openshift-image-registry/node-ca-8p72p" Dec 02 20:12:22 crc kubenswrapper[4796]: E1202 20:12:22.956078 4796 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 20:12:22 crc kubenswrapper[4796]: E1202 20:12:22.956108 4796 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 20:12:22 crc kubenswrapper[4796]: E1202 20:12:22.956159 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 20:12:30.956151486 +0000 UTC m=+33.959527020 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 20:12:22 crc kubenswrapper[4796]: E1202 20:12:22.956175 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 20:12:30.956169346 +0000 UTC m=+33.959544880 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.959441 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5558dc7c-93f9-4212-bf22-fdec743e47ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a71fbcbe900da771fbfbdbe666a0770e052c398e317c196741da6868bac278c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff692b0e7bcf0f9571138c9632ae4c22dddcccc147c6b0e7994e7ff6e13aa08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wzhpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:22Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.968667 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p72p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7cf3531-4bca-4b5d-9fa6-e70775605e3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47szf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p72p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:22Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.978892 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b634e2ec6fa9c531cec805314972afc064cf0790587894836a857a7c48e66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:22Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.991113 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:22Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.993511 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.993537 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.993545 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.993557 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:22 crc kubenswrapper[4796]: I1202 20:12:22.993566 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:22Z","lastTransitionTime":"2025-12-02T20:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.004475 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.017339 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m672l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03fe6ac0-1095-4336-a25c-4dd0d6e45053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31287b814c58a714ee9e12931a432d350919edbaae9abdfda56aa0379f08961a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nnfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m672l\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.033358 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a81d4f-9cb5-40b1-93cf-5691b915a68e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b286j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.056682 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47szf\" (UniqueName: \"kubernetes.io/projected/a7cf3531-4bca-4b5d-9fa6-e70775605e3a-kube-api-access-47szf\") pod \"node-ca-8p72p\" (UID: \"a7cf3531-4bca-4b5d-9fa6-e70775605e3a\") " pod="openshift-image-registry/node-ca-8p72p" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.056759 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a7cf3531-4bca-4b5d-9fa6-e70775605e3a-serviceca\") pod \"node-ca-8p72p\" (UID: \"a7cf3531-4bca-4b5d-9fa6-e70775605e3a\") " pod="openshift-image-registry/node-ca-8p72p" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.056792 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.056835 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a7cf3531-4bca-4b5d-9fa6-e70775605e3a-host\") pod \"node-ca-8p72p\" (UID: \"a7cf3531-4bca-4b5d-9fa6-e70775605e3a\") " pod="openshift-image-registry/node-ca-8p72p" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.056860 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:12:23 crc kubenswrapper[4796]: E1202 20:12:23.056969 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 20:12:23 crc kubenswrapper[4796]: E1202 20:12:23.056989 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 20:12:23 crc kubenswrapper[4796]: E1202 20:12:23.057001 4796 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.057012 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a7cf3531-4bca-4b5d-9fa6-e70775605e3a-host\") pod \"node-ca-8p72p\" (UID: \"a7cf3531-4bca-4b5d-9fa6-e70775605e3a\") " pod="openshift-image-registry/node-ca-8p72p" Dec 02 20:12:23 crc kubenswrapper[4796]: E1202 20:12:23.057053 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 20:12:31.057035828 +0000 UTC m=+34.060411362 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 20:12:23 crc kubenswrapper[4796]: E1202 20:12:23.057081 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 20:12:23 crc kubenswrapper[4796]: E1202 20:12:23.057128 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 20:12:23 crc kubenswrapper[4796]: E1202 20:12:23.057143 4796 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 20:12:23 crc kubenswrapper[4796]: E1202 20:12:23.057223 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 20:12:31.057205082 +0000 UTC m=+34.060580626 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.058723 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a7cf3531-4bca-4b5d-9fa6-e70775605e3a-serviceca\") pod \"node-ca-8p72p\" (UID: \"a7cf3531-4bca-4b5d-9fa6-e70775605e3a\") " pod="openshift-image-registry/node-ca-8p72p" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.075796 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47szf\" (UniqueName: \"kubernetes.io/projected/a7cf3531-4bca-4b5d-9fa6-e70775605e3a-kube-api-access-47szf\") pod \"node-ca-8p72p\" (UID: \"a7cf3531-4bca-4b5d-9fa6-e70775605e3a\") " pod="openshift-image-registry/node-ca-8p72p" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.096178 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.096212 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.096221 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.096236 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.096269 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:23Z","lastTransitionTime":"2025-12-02T20:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.161044 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-8p72p" Dec 02 20:12:23 crc kubenswrapper[4796]: W1202 20:12:23.178053 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7cf3531_4bca_4b5d_9fa6_e70775605e3a.slice/crio-d64b51fd34166f840efa8e2820865b0035d750c47cb3a4d1617bf7b7cd71d9a2 WatchSource:0}: Error finding container d64b51fd34166f840efa8e2820865b0035d750c47cb3a4d1617bf7b7cd71d9a2: Status 404 returned error can't find the container with id d64b51fd34166f840efa8e2820865b0035d750c47cb3a4d1617bf7b7cd71d9a2 Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.197780 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.197807 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.197815 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.197827 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.197835 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:23Z","lastTransitionTime":"2025-12-02T20:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.264392 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.264438 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:12:23 crc kubenswrapper[4796]: E1202 20:12:23.264495 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:12:23 crc kubenswrapper[4796]: E1202 20:12:23.264598 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.264438 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:12:23 crc kubenswrapper[4796]: E1202 20:12:23.264744 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.299558 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.299599 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.299609 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.299624 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.299635 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:23Z","lastTransitionTime":"2025-12-02T20:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.402323 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.402364 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.402377 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.402394 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.402407 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:23Z","lastTransitionTime":"2025-12-02T20:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.419494 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-8p72p" event={"ID":"a7cf3531-4bca-4b5d-9fa6-e70775605e3a","Type":"ContainerStarted","Data":"b5a44d3bef0785796e52961c2f6031af15cf2f142b34f771283ad3f2d8b8abed"} Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.419554 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-8p72p" event={"ID":"a7cf3531-4bca-4b5d-9fa6-e70775605e3a","Type":"ContainerStarted","Data":"d64b51fd34166f840efa8e2820865b0035d750c47cb3a4d1617bf7b7cd71d9a2"} Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.422969 4796 generic.go:334] "Generic (PLEG): container finished" podID="9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb" containerID="bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634" exitCode=0 Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.423009 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mzw77" event={"ID":"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb","Type":"ContainerDied","Data":"bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634"} Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.434805 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9a3760c-5324-4e7c-ba6d-947d4200fd7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca809eaaaad32548ccb944ec8a40abe7dc6349d1e4b1640e53d5cd1aba9e8798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3931263dfbba517a3f109c2fec313fdb38b7232f4384fbf922d2de2f40d95840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://567f06ef173b505801a2b909e2ffd85973c697cbd3d89c10a5b118247c58c193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bce1cb4a264ff1c25ca77f3e4c9aa251e20002d987f4e1aa61fce2c99799f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.447676 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.460897 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64bd216ef667b40cff69e55295ff7ac4d61f8e893dc1f24f12f42aed5463417b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.475828 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423e1ef0-d2b1-442c-8751-372d2de26a00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 20:12:09.715158 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 20:12:09.723356 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1998965949/tls.crt::/tmp/serving-cert-1998965949/tls.key\\\\\\\"\\\\nI1202 20:12:15.056297 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 20:12:15.060468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 20:12:15.060506 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 20:12:15.060526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 20:12:15.060532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 20:12:15.077181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1202 20:12:15.077200 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 20:12:15.077201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077211 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077216 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 20:12:15.077220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 20:12:15.077223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 20:12:15.077226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 20:12:15.079033 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.487341 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjq8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64bc04e9-c9fc-4a80-98de-59c88457ace6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9693d017132303f9c7998ef2e45ebe24e58350336b436a84e0a3d1260b8d42d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpzx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.501805 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5558dc7c-93f9-4212-bf22-fdec743e47ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a71fbcbe900da771fbfbdbe666a0770e052c398e317c196741da6868bac278c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff692b0e7bcf0f9571138c9632ae4c22dddcccc147c6b0e7994e7ff6e13aa08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wzhpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.504641 4796 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.504665 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.504673 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.504684 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.504692 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:23Z","lastTransitionTime":"2025-12-02T20:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.533986 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8cae09d-e96b-4d09-b52e-0914be7de1b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c89bf933f9cea43b887da33832efb415751c7e4b2c920514543b1e30a2a584c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fc05d7f51b4dd38d72107bd79f5c2e1ecfd31843e251bbeb816545d3e491a3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f59840a8a600786748b3a153bf8b1a784a55c3b43ffd837c5deaf1d14d15cbed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://417274ef4639b491835a4ae4d0df778270cdb94ee594d2ff8bfdd86b794ca488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd0e59c5c92c3ed21d1c83f3d929d9294fc86b7d7518bc1ed5091fc7c055b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.545091 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b634e2ec6fa9c531cec805314972afc064cf0790587894836a857a7c48e66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.555889 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.567328 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.578440 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m672l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03fe6ac0-1095-4336-a25c-4dd0d6e45053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31287b814c58a714ee9e12931a432d350919edbaae9abdfda56aa0379f08961a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nnfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m672l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.599578 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a81d4f-9cb5-40b1-93cf-5691b915a68e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b286j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:23Z 
is after 2025-08-24T17:21:41Z" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.607152 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.607187 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.607218 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.607239 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.607284 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:23Z","lastTransitionTime":"2025-12-02T20:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.608643 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p72p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7cf3531-4bca-4b5d-9fa6-e70775605e3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a44d3bef0785796e52961c2f6031af15cf2f142b34f771283ad3f2d8b8abed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47szf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-
02T20:12:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p72p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.622427 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzw77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-
02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzw77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.632607 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.641212 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64bc04e9-c9fc-4a80-98de-59c88457ace6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9693d017132303f9c7998ef2e45ebe24e58350336b436a84e0a3d1260b8d42d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpzx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.650191 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5558dc7c-93f9-4212-bf22-fdec743e47ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a71fbcbe900da771fbfbdbe666a0770e052c398e317c196741da6868bac278c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff692b0e7bcf0f9571138c9632ae4c22dddcccc147c6b0e7994e7ff6e13aa08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wzhpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.668515 4796 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8cae09d-e96b-4d09-b52e-0914be7de1b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c89bf933f9cea43b887da33832efb415751c7e4b2c920514543b1e30a2a584c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fc05d7f51b4dd38d72107bd79f5c2e1ecfd31843e251bbeb816545d3e491a3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f59840a8a600786748b3a153bf8b1a784a55c3b43ffd837c5deaf1d14d15cbed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://417274ef4639b491835a4ae4d0df778270cdb94ee594d2ff8bfdd86b794ca488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd0e59c5c92c3ed21d1c83f3d929d9294fc86b7d7518bc1ed5091fc7c055b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.680358 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b634e2ec6fa9c531cec805314972afc064cf0790587894836a857a7c48e66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.693615 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.709942 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.710154 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.710202 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.710219 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.710236 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.710266 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:23Z","lastTransitionTime":"2025-12-02T20:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.721291 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m672l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03fe6ac0-1095-4336-a25c-4dd0d6e45053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31287b814c58a714ee9e12931a432d350919edbaae9abdfda56aa0379f08961a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nnfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m672l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.739592 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a81d4f-9cb5-40b1-93cf-5691b915a68e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b286j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:23Z 
is after 2025-08-24T17:21:41Z" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.749200 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p72p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7cf3531-4bca-4b5d-9fa6-e70775605e3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a44d3bef0785796e52961c2f6031af15cf2f142b34f771283ad3f2d8b8abed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47szf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p72p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.787618 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzw77" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzw77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.813027 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.813050 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.813058 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.813071 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.813080 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:23Z","lastTransitionTime":"2025-12-02T20:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.831495 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.869922 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9a3760c-5324-4e7c-ba6d-947d4200fd7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca809eaaaad32548ccb944ec8a40abe7dc6349d1e4b1640e53d5cd1aba9e8798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3931263dfbba517a3f109c2fec313fdb38b7232f4384fbf922d2de2f40d95840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://567f06ef173b505801a2b909e2ffd85973c697cbd3d89c10a5b118247c58c193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bce1cb4a264ff1c25ca77f3e4c9aa251e20002d987f4e1aa61fce2c99799f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.909460 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.915724 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.915763 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.915778 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.915800 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.915815 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:23Z","lastTransitionTime":"2025-12-02T20:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.948482 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64bd216ef667b40cff69e55295ff7ac4d61f8e893dc1f24f12f42aed5463417b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:23 crc kubenswrapper[4796]: I1202 20:12:23.987020 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423e1ef0-d2b1-442c-8751-372d2de26a00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 20:12:09.715158 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 20:12:09.723356 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1998965949/tls.crt::/tmp/serving-cert-1998965949/tls.key\\\\\\\"\\\\nI1202 20:12:15.056297 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 20:12:15.060468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 20:12:15.060506 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 20:12:15.060526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 20:12:15.060532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 20:12:15.077181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1202 20:12:15.077200 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 20:12:15.077201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077211 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077216 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 20:12:15.077220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 20:12:15.077223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 20:12:15.077226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 20:12:15.079033 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.017797 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.017845 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.017860 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.017883 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.017900 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:24Z","lastTransitionTime":"2025-12-02T20:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.119830 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.119861 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.119869 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.119910 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.119920 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:24Z","lastTransitionTime":"2025-12-02T20:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.222690 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.222725 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.222736 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.222749 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.222757 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:24Z","lastTransitionTime":"2025-12-02T20:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.325930 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.325982 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.325999 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.326024 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.326043 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:24Z","lastTransitionTime":"2025-12-02T20:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.428939 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.430101 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.430130 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.430155 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.430173 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:24Z","lastTransitionTime":"2025-12-02T20:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.433467 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" event={"ID":"87a81d4f-9cb5-40b1-93cf-5691b915a68e","Type":"ContainerStarted","Data":"8f94dfdee88013ea62594ac91f02990be3bf0d544fb937f370baccd6720d1f5c"} Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.437011 4796 generic.go:334] "Generic (PLEG): container finished" podID="9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb" containerID="26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf" exitCode=0 Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.437061 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mzw77" event={"ID":"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb","Type":"ContainerDied","Data":"26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf"} Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.456352 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:24Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.473572 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:24Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.490833 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m672l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03fe6ac0-1095-4336-a25c-4dd0d6e45053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31287b814c58a714ee9e12931a432d350919edbaae9abdfda56aa0379f08961a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nnfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m672l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:24Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.513027 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a81d4f-9cb5-40b1-93cf-5691b915a68e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b286j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:24Z 
is after 2025-08-24T17:21:41Z" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.530165 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p72p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7cf3531-4bca-4b5d-9fa6-e70775605e3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a44d3bef0785796e52961c2f6031af15cf2f142b34f771283ad3f2d8b8abed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47szf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p72p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:24Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.532716 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.532793 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.532807 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.532825 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.532836 4796 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:24Z","lastTransitionTime":"2025-12-02T20:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.551234 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b634e2ec6fa9c531cec805314972afc064cf0790587894836a857a7c48e66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:24Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.569118 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:24Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.587987 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzw77" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzw77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:24Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.600894 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:24Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.615350 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64bd216ef667b40cff69e55295ff7ac4d61f8e893dc1f24f12f42aed5463417b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:24Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.631380 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423e1ef0-d2b1-442c-8751-372d2de26a00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 20:12:09.715158 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 20:12:09.723356 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1998965949/tls.crt::/tmp/serving-cert-1998965949/tls.key\\\\\\\"\\\\nI1202 20:12:15.056297 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 20:12:15.060468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 20:12:15.060506 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 20:12:15.060526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 20:12:15.060532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 20:12:15.077181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1202 20:12:15.077200 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 20:12:15.077201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077211 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077216 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 20:12:15.077220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 20:12:15.077223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 20:12:15.077226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 20:12:15.079033 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:24Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.635652 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.635690 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.635699 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.635740 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.635751 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:24Z","lastTransitionTime":"2025-12-02T20:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.645193 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9a3760c-5324-4e7c-ba6d-947d4200fd7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca809eaaaad32548ccb944ec8a40abe7dc6349d1e4b1640e53d5cd1aba9e8798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3931263dfbba517a3f109c2fec313fdb38b7232f4384fbf922d2de2f40d95840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://567f06ef173b505801a2b909e2ffd85973c697cbd3d89c10a5b118247c58c193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bce1cb4a264ff1c25ca77f3e4c9aa251e20002d987f4e1aa61fce2c99799f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:24Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.656761 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5558dc7c-93f9-4212-bf22-fdec743e47ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a71fbcbe900da771fbfbdbe666a0770e052c398e317c196741da6868bac278c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff692b0e7bcf0f9571138c9632ae4c22dddcccc147c6b0e7994e7ff6e13aa08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wzhpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:24Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.675345 4796 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8cae09d-e96b-4d09-b52e-0914be7de1b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c89bf933f9cea43b887da33832efb415751c7e4b2c920514543b1e30a2a584c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fc05d7f51b4dd38d72107bd79f5c2e1ecfd31843e251bbeb816545d3e491a3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f59840a8a600786748b3a153bf8b1a784a55c3b43ffd837c5deaf1d14d15cbed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://417274ef4639b491835a4ae4d0df778270cdb94ee594d2ff8bfdd86b794ca488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd0e59c5c92c3ed21d1c83f3d929d9294fc86b7d7518bc1ed5091fc7c055b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:24Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.685491 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjq8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64bc04e9-c9fc-4a80-98de-59c88457ace6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9693d017132303f9c7998ef2e45ebe24e58350336b436a84e0a3d1260b8d42d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpzx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:24Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.738636 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.738673 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.738682 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.738697 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.738707 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:24Z","lastTransitionTime":"2025-12-02T20:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.841592 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.841623 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.841631 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.841642 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.841652 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:24Z","lastTransitionTime":"2025-12-02T20:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.943244 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.943319 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.943334 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.943359 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:24 crc kubenswrapper[4796]: I1202 20:12:24.943375 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:24Z","lastTransitionTime":"2025-12-02T20:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.045953 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.046004 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.046018 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.046038 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.046052 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:25Z","lastTransitionTime":"2025-12-02T20:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.154041 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.154078 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.154086 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.154102 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.154111 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:25Z","lastTransitionTime":"2025-12-02T20:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.261221 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.261283 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.261294 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.261309 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.261318 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:25Z","lastTransitionTime":"2025-12-02T20:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.264681 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.264771 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:12:25 crc kubenswrapper[4796]: E1202 20:12:25.264950 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.264682 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:12:25 crc kubenswrapper[4796]: E1202 20:12:25.265146 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:12:25 crc kubenswrapper[4796]: E1202 20:12:25.264785 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.363728 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.363794 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.363813 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.363837 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.363855 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:25Z","lastTransitionTime":"2025-12-02T20:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.444328 4796 generic.go:334] "Generic (PLEG): container finished" podID="9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb" containerID="cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6" exitCode=0 Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.444380 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mzw77" event={"ID":"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb","Type":"ContainerDied","Data":"cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6"} Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.466711 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.466741 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.466753 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.466771 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.466783 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:25Z","lastTransitionTime":"2025-12-02T20:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.470539 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423e1ef0-d2b1-442c-8751-372d2de26a00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 20:12:09.715158 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 20:12:09.723356 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1998965949/tls.crt::/tmp/serving-cert-1998965949/tls.key\\\\\\\"\\\\nI1202 20:12:15.056297 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 20:12:15.060468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 20:12:15.060506 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 20:12:15.060526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 20:12:15.060532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 20:12:15.077181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1202 20:12:15.077200 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 20:12:15.077201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077211 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077216 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 20:12:15.077220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 20:12:15.077223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 20:12:15.077226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 20:12:15.079033 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:25Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.489286 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9a3760c-5324-4e7c-ba6d-947d4200fd7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca809eaaaad32548ccb944ec8a40abe7dc6349d1e4b1640e53d5cd1aba9e8798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3931263dfbba517a3f109c2fec313fdb38b7232f4384fbf922d2de2f40d95840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://567f06ef173b505801a2b909e2ffd85973c697cbd3d89c10a5b118247c58c193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bce1cb4a264ff1c25ca77f3e4c9aa251e20002d987f4e1aa61fce2c99799f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:25Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.504699 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:25Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.520371 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64bd216ef667b40cff69e55295ff7ac4d61f8e893dc1f24f12f42aed5463417b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:25Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.544870 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8cae09d-e96b-4d09-b52e-0914be7de1b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c89bf933f9cea43b887da33832efb415751c7e4b2c920514543b1e30a2a584c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fc05d7f51b4dd38d72107bd79f5c2e1ecfd31843e251bbeb816545d3e491a3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f59840a8a600786748b3a153bf8b1a784a55c3b43ffd837c5deaf1d14d15cbed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://417274ef4639b491835a4ae4d0df778270cdb94
ee594d2ff8bfdd86b794ca488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd0e59c5c92c3ed21d1c83f3d929d9294fc86b7d7518bc1ed5091fc7c055b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:25Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.555750 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64bc04e9-c9fc-4a80-98de-59c88457ace6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9693d017132303f9c7998ef2e45ebe24e58350336b436a84e0a3d1260b8d42d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpzx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:25Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.567978 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5558dc7c-93f9-4212-bf22-fdec743e47ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a71fbcbe900da771fbfbdbe666a0770e052c398e317c196741da6868bac278c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff692b0e7bcf0f9571138c9632ae4c22dddcccc147c6b0e7994e7ff6e13aa08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp
68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wzhpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:25Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.568751 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.568780 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.568790 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.568806 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.568817 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:25Z","lastTransitionTime":"2025-12-02T20:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.583343 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b634e2ec6fa9c531cec805314972afc064cf0790587894836a857a7c48e66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:25Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.596076 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:25Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.610476 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:25Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.627318 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m672l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03fe6ac0-1095-4336-a25c-4dd0d6e45053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31287b814c58a714ee9e12931a432d350919edbaae9abdfda56aa0379f08961a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nnfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m672l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:25Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.650747 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a81d4f-9cb5-40b1-93cf-5691b915a68e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b286j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:25Z 
is after 2025-08-24T17:21:41Z" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.662700 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p72p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7cf3531-4bca-4b5d-9fa6-e70775605e3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a44d3bef0785796e52961c2f6031af15cf2f142b34f771283ad3f2d8b8abed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47szf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p72p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:25Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.671534 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.671582 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.671598 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.671619 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.671634 4796 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:25Z","lastTransitionTime":"2025-12-02T20:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.677588 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:25Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.692784 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzw77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee3427
28f81ed2e48ddf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzw77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:25Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.774336 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.774375 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.774386 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.774400 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.774411 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:25Z","lastTransitionTime":"2025-12-02T20:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.877092 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.877163 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.877182 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.877207 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.877224 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:25Z","lastTransitionTime":"2025-12-02T20:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.980733 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.980773 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.980784 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.980801 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:25 crc kubenswrapper[4796]: I1202 20:12:25.980816 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:25Z","lastTransitionTime":"2025-12-02T20:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.083548 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.083648 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.083673 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.083711 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.083737 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:26Z","lastTransitionTime":"2025-12-02T20:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.186328 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.186694 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.186705 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.186719 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.186728 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:26Z","lastTransitionTime":"2025-12-02T20:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.289130 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.289153 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.289161 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.289173 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.289182 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:26Z","lastTransitionTime":"2025-12-02T20:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.397446 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.397497 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.397526 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.397551 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.397569 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:26Z","lastTransitionTime":"2025-12-02T20:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.452045 4796 generic.go:334] "Generic (PLEG): container finished" podID="9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb" containerID="15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f" exitCode=0 Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.452243 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mzw77" event={"ID":"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb","Type":"ContainerDied","Data":"15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f"} Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.461068 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" event={"ID":"87a81d4f-9cb5-40b1-93cf-5691b915a68e","Type":"ContainerStarted","Data":"b398c523488b2b448278d063e640f93da99a6652df77bc9ef3db287d6b9fcf92"} Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.461864 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.461890 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.474030 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5558dc7c-93f9-4212-bf22-fdec743e47ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a71fbcbe900da771fbfbdbe666a0770e052c398e317c196741da6868bac278c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff692b0e7bcf0f9571138c9632ae4c22dddcccc147c6b0e7994e7ff6e13aa08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wzhpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:26Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.484377 4796 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.484676 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.498999 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8cae09d-e96b-4d09-b52e-0914be7de1b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c89bf933f9cea43b887da33832efb415751c7e4b2c920514543b1e30a2a584c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fc05d7f51b4dd38d72107bd79f5c2e1ecfd31843e251bbeb816545d3e491a3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f59840a8a600786748b3a153bf8b1a784a55c3b43ffd837c5deaf1d14d15cbed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://417274ef4639b491835a4ae4d0df778270cdb94ee594d2ff8bfdd86b794ca488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd0e59c5c92c3ed21d1c83f3d929d9294fc86b7d7518bc1ed5091fc7c055b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:26Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.499896 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.499926 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.499935 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.500068 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.500171 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:26Z","lastTransitionTime":"2025-12-02T20:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.510569 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64bc04e9-c9fc-4a80-98de-59c88457ace6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9693d017132303f9c7998ef2e45ebe24e58350336b436a84e0a3d1260b8d42d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpzx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:26Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.522989 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:26Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.538410 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:26Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.552123 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m672l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03fe6ac0-1095-4336-a25c-4dd0d6e45053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31287b814c58a714ee9e12931a432d350919edbaae9abdfda56aa0379f08961a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nnfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m672l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:26Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.569587 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a81d4f-9cb5-40b1-93cf-5691b915a68e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b286j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:26Z 
is after 2025-08-24T17:21:41Z" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.580124 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p72p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7cf3531-4bca-4b5d-9fa6-e70775605e3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a44d3bef0785796e52961c2f6031af15cf2f142b34f771283ad3f2d8b8abed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47szf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p72p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:26Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.593772 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b634e2ec6fa9c531cec805314972afc064cf0790587894836a857a7c48e66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:26Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.602980 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.603021 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.603029 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.603048 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.603058 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:26Z","lastTransitionTime":"2025-12-02T20:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.609319 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:26Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.623692 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzw77" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzw77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:26Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.639176 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:26Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.650555 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64bd216ef667b40cff69e55295ff7ac4d61f8e893dc1f24f12f42aed5463417b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:26Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.682904 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423e1ef0-d2b1-442c-8751-372d2de26a00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 20:12:09.715158 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 20:12:09.723356 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1998965949/tls.crt::/tmp/serving-cert-1998965949/tls.key\\\\\\\"\\\\nI1202 20:12:15.056297 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 20:12:15.060468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 20:12:15.060506 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 20:12:15.060526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 20:12:15.060532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 20:12:15.077181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1202 20:12:15.077200 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 20:12:15.077201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077211 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077216 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 20:12:15.077220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 20:12:15.077223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 20:12:15.077226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 20:12:15.079033 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:26Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.705123 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.705162 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.705170 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.705200 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.705209 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:26Z","lastTransitionTime":"2025-12-02T20:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.706288 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9a3760c-5324-4e7c-ba6d-947d4200fd7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca809eaaaad32548ccb944ec8a40abe7dc6349d1e4b1640e53d5cd1aba9e8798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3931263dfbba517a3f109c2fec313fdb38b7232f4384fbf922d2de2f40d95840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://567f06ef173b505801a2b909e2ffd85973c697cbd3d89c10a5b118247c58c193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bce1cb4a264ff1c25ca77f3e4c9aa251e20002d987f4e1aa61fce2c99799f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:26Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.722763 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b634e2ec6fa9c531cec805314972afc064cf0790587894836a857a7c48e66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:26Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.734673 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:26Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.746492 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:26Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.764813 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m672l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03fe6ac0-1095-4336-a25c-4dd0d6e45053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31287b814c58a714ee9e12931a432d350919edbaae9abdfda56aa0379f08961a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nnfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m672l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:26Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.782262 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a81d4f-9cb5-40b1-93cf-5691b915a68e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://655f6d42d97749b669cfa44687f71146615027902d52926d59558a788be4553a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb637106bbcec18e199338d1deb9718195ec58a98bd3926955d08e7f77401544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://045bb912902864e6abcf1d71fd0e3f8dc941197ea1e572d2e2e2011b536ba7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be133bdee26d23a224b8da570240c1862aa542c005d8017b2d1817c7c286769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b39195ca74c45e0c948605f1741eef4d655ae6ee34c0fdf6fb1159336e99c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e244b0768662bff1ef8a3d0ad1db2df8f9470ec043bbe007542e37d69a3586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b398c523488b2b448278d063e640f93da99a6652
df77bc9ef3db287d6b9fcf92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f94dfdee88013ea62594ac91f02990be3bf0d544fb937f370baccd6720d1f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b286j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:26Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.791127 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p72p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7cf3531-4bca-4b5d-9fa6-e70775605e3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a44d3bef0785796e52961c2f6031af15cf2f142b34f771283ad3f2d8b8abed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47szf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p72p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:26Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.802859 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:26Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.807108 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.807151 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.807164 4796 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.807181 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.807195 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:26Z","lastTransitionTime":"2025-12-02T20:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.815281 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzw77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzw77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:26Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.827051 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423e1ef0-d2b1-442c-8751-372d2de26a00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 20:12:09.715158 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 20:12:09.723356 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1998965949/tls.crt::/tmp/serving-cert-1998965949/tls.key\\\\\\\"\\\\nI1202 20:12:15.056297 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 20:12:15.060468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 20:12:15.060506 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 20:12:15.060526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 20:12:15.060532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 20:12:15.077181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1202 20:12:15.077200 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 20:12:15.077201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077211 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077216 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 20:12:15.077220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 20:12:15.077223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 20:12:15.077226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 20:12:15.079033 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:26Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.836891 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9a3760c-5324-4e7c-ba6d-947d4200fd7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca809eaaaad32548ccb944ec8a40abe7dc6349d1e4b1640e53d5cd1aba9e8798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3931263dfbba517a3f109c2fec313fdb38b7232f4384fbf922d2de2f40d95840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://567f06ef173b505801a2b909e2ffd85973c697cbd3d89c10a5b118247c58c193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bce1cb4a264ff1c25ca77f3e4c9aa251e20002d987f4e1aa61fce2c99799f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:26Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.846321 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:26Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.855391 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64bd216ef667b40cff69e55295ff7ac4d61f8e893dc1f24f12f42aed5463417b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:26Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.875204 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8cae09d-e96b-4d09-b52e-0914be7de1b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c89bf933f9cea43b887da33832efb415751c7e4b2c920514543b1e30a2a584c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fc05d7f51b4dd38d72107bd79f5c2e1ecfd31843e251bbeb816545d3e491a3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f59840a8a600786748b3a153bf8b1a784a55c3b43ffd837c5deaf1d14d15cbed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://417274ef4639b491835a4ae4d0df778270cdb94
ee594d2ff8bfdd86b794ca488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd0e59c5c92c3ed21d1c83f3d929d9294fc86b7d7518bc1ed5091fc7c055b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:26Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.885830 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64bc04e9-c9fc-4a80-98de-59c88457ace6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9693d017132303f9c7998ef2e45ebe24e58350336b436a84e0a3d1260b8d42d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpzx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:26Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.896577 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5558dc7c-93f9-4212-bf22-fdec743e47ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a71fbcbe900da771fbfbdbe666a0770e052c398e317c196741da6868bac278c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff692b0e7bcf0f9571138c9632ae4c22dddcccc147c6b0e7994e7ff6e13aa08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp
68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wzhpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:26Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.909620 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.909675 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.909689 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.909711 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:26 crc kubenswrapper[4796]: I1202 20:12:26.909729 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:26Z","lastTransitionTime":"2025-12-02T20:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.012489 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.012554 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.012566 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.012589 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.012605 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:27Z","lastTransitionTime":"2025-12-02T20:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.116423 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.116487 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.116507 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.116533 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.116556 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:27Z","lastTransitionTime":"2025-12-02T20:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.218945 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.218988 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.218999 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.219015 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.219029 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:27Z","lastTransitionTime":"2025-12-02T20:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.264843 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.264843 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:12:27 crc kubenswrapper[4796]: E1202 20:12:27.264954 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.265024 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:12:27 crc kubenswrapper[4796]: E1202 20:12:27.265060 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:12:27 crc kubenswrapper[4796]: E1202 20:12:27.265192 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.278376 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b634e2ec6fa9c531cec805314972afc064cf0790587894836a857a7c48e66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:27Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.295468 4796 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:27Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.313241 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:27Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.321272 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.321314 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.321325 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.321343 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.321358 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:27Z","lastTransitionTime":"2025-12-02T20:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.331842 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m672l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03fe6ac0-1095-4336-a25c-4dd0d6e45053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31287b814c58a714ee9e12931a432d350919edbaae9abdfda56aa0379f08961a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nnfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m672l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:27Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.349494 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.355223 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a81d4f-9cb5-40b1-93cf-5691b915a68e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://655f6d42d97749b669cfa44687f71146615027902d52926d59558a788be4553a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb637106bbcec18e199338d1deb9718195ec58a98bd3926955d08e7f77401544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://045bb912902864e6abcf1d71fd0e3f8dc941197ea1e572d2e2e2011b536ba7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be133bdee26d23a224b8da570240c1862aa542c005d8017b2d1817c7c286769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b39195ca74c45e0c948605f1741eef4d655ae6ee34c0fdf6fb1159336e99c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e244b0768662bff1ef8a3d0ad1db2df8f9470ec043bbe007542e37d69a3586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b398c523488b2b448278d063e640f93da99a6652
df77bc9ef3db287d6b9fcf92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f94dfdee88013ea62594ac91f02990be3bf0d544fb937f370baccd6720d1f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b286j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:27Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.367166 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p72p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7cf3531-4bca-4b5d-9fa6-e70775605e3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a44d3bef0785796e52961c2f6031af15cf2f142b34f771283ad3f2d8b8abed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47szf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p72p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:27Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.381995 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:27Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.398810 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzw77" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzw77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:27Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.412822 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"423e1ef0-d2b1-442c-8751-372d2de26a00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 20:12:09.715158 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 20:12:09.723356 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1998965949/tls.crt::/tmp/serving-cert-1998965949/tls.key\\\\\\\"\\\\nI1202 20:12:15.056297 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 20:12:15.060468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 20:12:15.060506 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 20:12:15.060526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 20:12:15.060532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 20:12:15.077181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1202 20:12:15.077200 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 20:12:15.077201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077211 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077216 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 20:12:15.077220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 20:12:15.077223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 20:12:15.077226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 20:12:15.079033 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:27Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.423386 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.423421 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.423432 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.423448 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.423461 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:27Z","lastTransitionTime":"2025-12-02T20:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.429436 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9a3760c-5324-4e7c-ba6d-947d4200fd7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca809eaaaad32548ccb944ec8a40abe7dc6349d1e4b1640e53d5cd1aba9e8798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3931263dfbba517a3f109c2fec313fdb38b7232f4384fbf922d2de2f40d95840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://567f06ef173b505801a2b909e2ffd85973c697cbd3d89c10a5b118247c58c193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bce1cb4a264ff1c25ca77f3e4c9aa251e20002d987f4e1aa61fce2c99799f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:27Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.441960 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:27Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.454763 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64bd216ef667b40cff69e55295ff7ac4d61f8e893dc1f24f12f42aed5463417b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:27Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.467115 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mzw77" 
event={"ID":"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb","Type":"ContainerStarted","Data":"93502952b6cfea496c2427f936ff031762e3577ca74b4eee4998e3800f768211"} Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.467160 4796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.478702 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8cae09d-e96b-4d09-b52e-0914be7de1b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c89bf933f9cea43b887da33832efb415751c7e4b2c920514543b1e30a2a584c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fc05d7f51b4dd38d72107bd79f5c2e1ecfd31843e251bbeb816545d3e491a3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f59840a8a600786748b3a153bf8b1a784a55c3b43ffd837c5deaf1d14d15cbed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408
f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://417274ef4639b491835a4ae4d0df778270cdb94ee594d2ff8bfdd86b794ca488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd0e59c5c92c3ed21d1c83f3d929d9294fc86b7d7518bc1ed5091fc7c055b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:27Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.490346 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjq8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64bc04e9-c9fc-4a80-98de-59c88457ace6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9693d017132303f9c7998ef2e45ebe24e58350336b436a84e0a3d1260b8d42d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpzx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:27Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.506777 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5558dc7c-93f9-4212-bf22-fdec743e47ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a71fbcbe900da771fbfbdbe666a0770e052c398e317c196741da6868bac278c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff692b0e7bcf0f9571138c9632ae4c22dddcccc147c6b0e7994e7ff6e13aa08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wzhpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:27Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.519803 4796 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:27Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.526023 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.526053 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.526062 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.526076 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.526085 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:27Z","lastTransitionTime":"2025-12-02T20:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.534761 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m672l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03fe6ac0-1095-4336-a25c-4dd0d6e45053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31287b814c58a714ee9e12931a432d350919edbaae9abdfda56aa0379f08961a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nnfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m672l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:27Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.556460 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a81d4f-9cb5-40b1-93cf-5691b915a68e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://655f6d42d97749b669cfa44687f71146615027902d52926d59558a788be4553a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb637106bbcec18e199338d1deb9718195ec58a98bd3926955d08e7f77401544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://045bb912902864e6abcf1d71fd0e3f8dc941197ea1e572d2e2e2011b536ba7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be133bdee26d23a224b8da570240c1862aa542c005d8017b2d1817c7c286769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b39195ca74c45e0c948605f1741eef4d655ae6ee34c0fdf6fb1159336e99c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e244b0768662bff1ef8a3d0ad1db2df8f9470ec043bbe007542e37d69a3586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b398c523488b2b448278d063e640f93da99a6652df77bc9ef3db287d6b9fcf92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\
\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f94dfdee88013ea62594ac91f02990be3bf0d544fb937f370baccd6720d1f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b286j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:27Z is 
after 2025-08-24T17:21:41Z" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.567212 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p72p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7cf3531-4bca-4b5d-9fa6-e70775605e3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a44d3bef0785796e52961c2f6031af15cf2f142b34f771283ad3f2d8b8abed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47szf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p72p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:27Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.581735 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b634e2ec6fa9c531cec805314972afc064cf0790587894836a857a7c48e66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:27Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.597604 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:27Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.610785 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:27Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.625850 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzw77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93502952b6cfea496c2427f936ff031762e3577ca74b4eee4998e3800f768211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1
e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzw77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:27Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.628212 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.628245 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.628280 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.628295 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.628307 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:27Z","lastTransitionTime":"2025-12-02T20:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.636481 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:27Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.646450 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64bd216ef667b40cff69e55295ff7ac4d61f8e893dc1f24f12f42aed5463417b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:27Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.659200 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"423e1ef0-d2b1-442c-8751-372d2de26a00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 20:12:09.715158 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 20:12:09.723356 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1998965949/tls.crt::/tmp/serving-cert-1998965949/tls.key\\\\\\\"\\\\nI1202 20:12:15.056297 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 20:12:15.060468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 20:12:15.060506 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 20:12:15.060526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 20:12:15.060532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 20:12:15.077181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1202 20:12:15.077200 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 20:12:15.077201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077211 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077216 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 20:12:15.077220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 20:12:15.077223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 20:12:15.077226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 20:12:15.079033 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:27Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.671095 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9a3760c-5324-4e7c-ba6d-947d4200fd7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca809eaaaad32548ccb944ec8a40abe7dc6349d1e4b1640e53d5cd1aba9e8798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3931263dfbba517a3f109c2fec313fdb38b7232f4384fbf922d2de2f40d95840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://567f06ef173b505801a2b909e2ffd85973c697cbd3d89c10a5b118247c58c193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bce1cb4a264ff1c25ca77f3e4c9aa251e20002d987f4e1aa61fce2c99799f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:27Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.689519 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8cae09d-e96b-4d09-b52e-0914be7de1b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c89bf933f9cea43b887da33832efb415751c7e4b2c920514543b1e30a2a584c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fc05d7f51b4dd38d72107bd79f5c2e1ecfd31843e251bbeb816545d3e491a3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f59840a8a600786748b3a153bf8b1a784a55c3b43ffd837c5deaf1d14d15cbed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://417274ef4639b491835a4ae4d0df778270cdb94ee594d2ff8bfdd86b794ca488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd0e59c5c92c3ed21d1c83f3d929d9294fc86b7d7518bc1ed5091fc7c055b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:27Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.699647 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjq8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64bc04e9-c9fc-4a80-98de-59c88457ace6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9693d017132303f9c7998ef2e45ebe24e58350336b436a84e0a3d1260b8d42d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpzx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:27Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.709008 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5558dc7c-93f9-4212-bf22-fdec743e47ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a71fbcbe900da771fbfbdbe666a0770e052c398e317c196741da6868bac278c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff692b0e7bcf0f9571138c9632ae4c22dddcccc147c6b0e7994e7ff6e13aa08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wzhpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:27Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.730457 4796 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.730486 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.730495 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.730508 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.730517 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:27Z","lastTransitionTime":"2025-12-02T20:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.832287 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.832325 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.832334 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.832352 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.832363 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:27Z","lastTransitionTime":"2025-12-02T20:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.934151 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.934191 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.934201 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.934216 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:27 crc kubenswrapper[4796]: I1202 20:12:27.934226 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:27Z","lastTransitionTime":"2025-12-02T20:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.036165 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.036203 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.036216 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.036232 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.036241 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:28Z","lastTransitionTime":"2025-12-02T20:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.138358 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.138393 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.138404 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.138418 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.138428 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:28Z","lastTransitionTime":"2025-12-02T20:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.240863 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.240902 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.240912 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.240926 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.240936 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:28Z","lastTransitionTime":"2025-12-02T20:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.342778 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.342809 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.342820 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.342835 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.342847 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:28Z","lastTransitionTime":"2025-12-02T20:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.445090 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.445136 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.445146 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.445163 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.445173 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:28Z","lastTransitionTime":"2025-12-02T20:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.469027 4796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.547553 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.547596 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.547609 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.547625 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.547638 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:28Z","lastTransitionTime":"2025-12-02T20:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.647150 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.647174 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.647181 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.647193 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.647203 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:28Z","lastTransitionTime":"2025-12-02T20:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:28 crc kubenswrapper[4796]: E1202 20:12:28.660643 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"45bad139-cd57-490c-a638-731a42709a6c\\\",\\\"systemUUID\\\":\\\"9fca617c-b4ed-442d-9d01-94fab08be868\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:28Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.665154 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.665193 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
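
The NodeNotReady and "Node became not ready" entries repeating every ~100 ms are the kubelet re-evaluating its runtime network check: the node stays NotReady until the container runtime reports NetworkReady=true, which on this node means a CNI configuration appearing in /etc/kubernetes/cni/net.d/ once the cluster network operator writes it. The sketch below is only a rough approximation of that check, using the directory path quoted in the log; the real test is performed by CRI-O through libcni, not by the kubelet reading the directory itself.

    from pathlib import Path

    # Directory quoted in the kubelet message above; adjust if the runtime uses another path.
    CNI_CONF_DIR = Path("/etc/kubernetes/cni/net.d")

    confs = []
    if CNI_CONF_DIR.is_dir():
        # libcni loads .conf, .conflist and .json files as network configurations.
        confs = sorted(p for p in CNI_CONF_DIR.iterdir()
                       if p.suffix in {".conf", ".conflist", ".json"})

    if confs:
        print("CNI configuration present:", ", ".join(p.name for p in confs))
    else:
        print("no CNI configuration found; the runtime will keep reporting "
              "NetworkReady=false and the node will stay NotReady")

Once the network operator's pods start and drop their configuration into that directory, the NetworkReady condition flips and this class of message stops.
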
event="NodeHasNoDiskPressure" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.665202 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.665216 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.665226 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:28Z","lastTransitionTime":"2025-12-02T20:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:28 crc kubenswrapper[4796]: E1202 20:12:28.675852 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"45bad139-cd57-490c-a638-731a42709a6c\\\",\\\"systemUUID\\\":\\\"9fca617c-b4ed-442d-9d01-94fab08be868\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:28Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.680025 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.680077 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.680087 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.680104 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.680116 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:28Z","lastTransitionTime":"2025-12-02T20:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:28 crc kubenswrapper[4796]: E1202 20:12:28.694290 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"45bad139-cd57-490c-a638-731a42709a6c\\\",\\\"systemUUID\\\":\\\"9fca617c-b4ed-442d-9d01-94fab08be868\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:28Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.697541 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.697579 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.697590 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.697605 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.697617 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:28Z","lastTransitionTime":"2025-12-02T20:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:28 crc kubenswrapper[4796]: E1202 20:12:28.710367 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"45bad139-cd57-490c-a638-731a42709a6c\\\",\\\"systemUUID\\\":\\\"9fca617c-b4ed-442d-9d01-94fab08be868\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:28Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.714094 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.714142 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.714155 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.714174 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.714188 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:28Z","lastTransitionTime":"2025-12-02T20:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:28 crc kubenswrapper[4796]: E1202 20:12:28.727184 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"45bad139-cd57-490c-a638-731a42709a6c\\\",\\\"systemUUID\\\":\\\"9fca617c-b4ed-442d-9d01-94fab08be868\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:28Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:28 crc kubenswrapper[4796]: E1202 20:12:28.727353 4796 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.729680 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.729714 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.729727 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.729745 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.729758 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:28Z","lastTransitionTime":"2025-12-02T20:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.831613 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.831650 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.831660 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.831674 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.831683 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:28Z","lastTransitionTime":"2025-12-02T20:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.933832 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.933904 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.933915 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.933928 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:28 crc kubenswrapper[4796]: I1202 20:12:28.933939 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:28Z","lastTransitionTime":"2025-12-02T20:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.036148 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.036192 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.036202 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.036217 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.036228 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:29Z","lastTransitionTime":"2025-12-02T20:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.138775 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.138841 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.138854 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.138885 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.138901 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:29Z","lastTransitionTime":"2025-12-02T20:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.241574 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.241628 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.241640 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.241654 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.241668 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:29Z","lastTransitionTime":"2025-12-02T20:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.265137 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.265155 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:12:29 crc kubenswrapper[4796]: E1202 20:12:29.265272 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.265300 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:12:29 crc kubenswrapper[4796]: E1202 20:12:29.265509 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:12:29 crc kubenswrapper[4796]: E1202 20:12:29.265657 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.344118 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.344266 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.344306 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.344319 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.344329 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:29Z","lastTransitionTime":"2025-12-02T20:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.446750 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.446785 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.446793 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.446806 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.446816 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:29Z","lastTransitionTime":"2025-12-02T20:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.473187 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b286j_87a81d4f-9cb5-40b1-93cf-5691b915a68e/ovnkube-controller/0.log" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.475588 4796 generic.go:334] "Generic (PLEG): container finished" podID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerID="b398c523488b2b448278d063e640f93da99a6652df77bc9ef3db287d6b9fcf92" exitCode=1 Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.475650 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" event={"ID":"87a81d4f-9cb5-40b1-93cf-5691b915a68e","Type":"ContainerDied","Data":"b398c523488b2b448278d063e640f93da99a6652df77bc9ef3db287d6b9fcf92"} Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.476271 4796 scope.go:117] "RemoveContainer" containerID="b398c523488b2b448278d063e640f93da99a6652df77bc9ef3db287d6b9fcf92" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.493481 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:29Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.511656 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzw77" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93502952b6cfea496c2427f936ff031762e3577ca74b4eee4998e3800f768211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzw77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:29Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.527239 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"423e1ef0-d2b1-442c-8751-372d2de26a00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 20:12:09.715158 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 20:12:09.723356 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1998965949/tls.crt::/tmp/serving-cert-1998965949/tls.key\\\\\\\"\\\\nI1202 20:12:15.056297 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 20:12:15.060468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 20:12:15.060506 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 20:12:15.060526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 20:12:15.060532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 20:12:15.077181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1202 20:12:15.077200 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 20:12:15.077201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077211 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077216 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 20:12:15.077220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 20:12:15.077223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 20:12:15.077226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 20:12:15.079033 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:29Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.539556 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9a3760c-5324-4e7c-ba6d-947d4200fd7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca809eaaaad32548ccb944ec8a40abe7dc6349d1e4b1640e53d5cd1aba9e8798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3931263dfbba517a3f109c2fec313fdb38b7232f4384fbf922d2de2f40d95840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://567f06ef173b505801a2b909e2ffd85973c697cbd3d89c10a5b118247c58c193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bce1cb4a264ff1c25ca77f3e4c9aa251e20002d987f4e1aa61fce2c99799f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:29Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.548995 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.549047 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.549061 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.549083 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.549096 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:29Z","lastTransitionTime":"2025-12-02T20:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.552495 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:29Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.566416 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64bd216ef667b40cff69e55295ff7ac4d61f8e893dc1f24f12f42aed5463417b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:29Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.590416 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8cae09d-e96b-4d09-b52e-0914be7de1b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c89bf933f9cea43b887da33832efb415751c7e4b2c920514543b1e30a2a584c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fc05d7f51b4dd38d72107bd79f5c2e1ecfd31843e251bbeb816545d3e491a3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f59840a8a600786748b3a153bf8b1a784a55c3b43ffd837c5deaf1d14d15cbed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://417274ef4639b491835a4ae4d0df778270cdb94
ee594d2ff8bfdd86b794ca488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd0e59c5c92c3ed21d1c83f3d929d9294fc86b7d7518bc1ed5091fc7c055b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:29Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.602314 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64bc04e9-c9fc-4a80-98de-59c88457ace6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9693d017132303f9c7998ef2e45ebe24e58350336b436a84e0a3d1260b8d42d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpzx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:29Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.618579 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5558dc7c-93f9-4212-bf22-fdec743e47ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a71fbcbe900da771fbfbdbe666a0770e052c398e317c196741da6868bac278c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff692b0e7bcf0f9571138c9632ae4c22dddcccc147c6b0e7994e7ff6e13aa08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp
68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wzhpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:29Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.640678 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a81d4f-9cb5-40b1-93cf-5691b915a68e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://655f6d42d97749b669cfa44687f71146615027902d52926d59558a788be4553a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb637106bbcec18e199338d1deb9718195ec58a98bd3926955d08e7f77401544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://045bb912902864e6abcf1d71fd0e3f8dc941197ea1e572d2e2e2011b536ba7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be133bdee26d23a224b8da570240c1862aa542c005d8017b2d1817c7c286769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b39195ca74c45e0c948605f1741eef4d655ae6ee34c0fdf6fb1159336e99c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e244b0768662bff1ef8a3d0ad1db2df8f9470ec043bbe007542e37d69a3586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b398c523488b2b448278d063e640f93da99a6652
df77bc9ef3db287d6b9fcf92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b398c523488b2b448278d063e640f93da99a6652df77bc9ef3db287d6b9fcf92\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T20:12:29Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 20:12:28.842374 6079 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 20:12:28.842388 6079 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1202 20:12:28.842403 6079 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 20:12:28.842408 6079 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 20:12:28.842420 6079 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 20:12:28.842449 6079 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1202 20:12:28.842468 6079 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1202 20:12:28.842490 6079 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 20:12:28.842486 6079 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 20:12:28.842513 6079 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 20:12:28.842544 6079 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 20:12:28.842593 6079 factory.go:656] Stopping watch factory\\\\nI1202 20:12:28.842614 6079 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1202 20:12:28.842478 6079 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1202 20:12:28.842847 6079 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f94dfdee88013ea62594ac91f02990be3bf0d544fb937f370baccd6720d1f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b286j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:29Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.652600 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.652704 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.652720 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.652742 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.652771 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:29Z","lastTransitionTime":"2025-12-02T20:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.653278 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p72p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7cf3531-4bca-4b5d-9fa6-e70775605e3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a44d3bef0785796e52961c2f6031af15cf2f142b34f771283ad3f2d8b8abed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47szf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p72p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:29Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.671997 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b634e2ec6fa9c531cec805314972afc064cf0790587894836a857a7c48e66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:29Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.683982 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:29Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.697320 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:29Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.710168 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m672l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03fe6ac0-1095-4336-a25c-4dd0d6e45053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31287b814c58a714ee9e12931a432d350919edbaae9abdfda56aa0379f08961a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nnfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m672l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:29Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.755682 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.755749 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.755759 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.755776 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.755787 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:29Z","lastTransitionTime":"2025-12-02T20:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.857813 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.857851 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.857859 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.857873 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.857881 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:29Z","lastTransitionTime":"2025-12-02T20:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.960009 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.960052 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.960065 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.960081 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:29 crc kubenswrapper[4796]: I1202 20:12:29.960091 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:29Z","lastTransitionTime":"2025-12-02T20:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.062241 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.062294 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.062306 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.062321 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.062332 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:30Z","lastTransitionTime":"2025-12-02T20:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.164679 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.164728 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.164738 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.164755 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.164766 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:30Z","lastTransitionTime":"2025-12-02T20:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.266769 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.266817 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.266828 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.266845 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.266857 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:30Z","lastTransitionTime":"2025-12-02T20:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.369387 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.369431 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.369442 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.369462 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.369473 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:30Z","lastTransitionTime":"2025-12-02T20:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.471165 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.471206 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.471219 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.471235 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.471247 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:30Z","lastTransitionTime":"2025-12-02T20:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.480222 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b286j_87a81d4f-9cb5-40b1-93cf-5691b915a68e/ovnkube-controller/0.log" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.482660 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" event={"ID":"87a81d4f-9cb5-40b1-93cf-5691b915a68e","Type":"ContainerStarted","Data":"5a2c4c421ad67664f3966e70b69486e8bb132090a43d539283cccbf4f0940b5e"} Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.482803 4796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.493763 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:30Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.503839 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64bd216ef667b40cff69e55295ff7ac4d61f8e893dc1f24f12f42aed5463417b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:30Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.516185 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"423e1ef0-d2b1-442c-8751-372d2de26a00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 20:12:09.715158 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 20:12:09.723356 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1998965949/tls.crt::/tmp/serving-cert-1998965949/tls.key\\\\\\\"\\\\nI1202 20:12:15.056297 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 20:12:15.060468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 20:12:15.060506 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 20:12:15.060526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 20:12:15.060532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 20:12:15.077181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1202 20:12:15.077200 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 20:12:15.077201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077211 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077216 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 20:12:15.077220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 20:12:15.077223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 20:12:15.077226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 20:12:15.079033 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:30Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.526662 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9a3760c-5324-4e7c-ba6d-947d4200fd7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca809eaaaad32548ccb944ec8a40abe7dc6349d1e4b1640e53d5cd1aba9e8798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3931263dfbba517a3f109c2fec313fdb38b7232f4384fbf922d2de2f40d95840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://567f06ef173b505801a2b909e2ffd85973c697cbd3d89c10a5b118247c58c193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bce1cb4a264ff1c25ca77f3e4c9aa251e20002d987f4e1aa61fce2c99799f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:30Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.536797 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5558dc7c-93f9-4212-bf22-fdec743e47ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a71fbcbe900da771fbfbdbe666a0770e052c398e317c196741da6868bac278c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff692b0e7bcf0f957
1138c9632ae4c22dddcccc147c6b0e7994e7ff6e13aa08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wzhpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:30Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.554379 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8cae09d-e96b-4d09-b52e-0914be7de1b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c89bf933f9cea43b887da33832efb415751c7e4b2c920514543b1e30a2a584c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fc05d7f51b4dd38d72107bd79f
5c2e1ecfd31843e251bbeb816545d3e491a3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f59840a8a600786748b3a153bf8b1a784a55c3b43ffd837c5deaf1d14d15cbed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://417274ef4639b491835a4ae4d0df778270cdb94ee594d2ff8bfdd86b794ca488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd0e59c5c92c3ed21d1c83f3d929d9294fc86b7d7518bc1ed5091fc7c055b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:30Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.565626 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjq8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64bc04e9-c9fc-4a80-98de-59c88457ace6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9693d017132303f9c7998ef2e45ebe24e58350336b436a84e0a3d1260b8d42d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpzx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:30Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.573806 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.573841 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.573852 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.573869 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.573884 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:30Z","lastTransitionTime":"2025-12-02T20:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.577117 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:30Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.591734 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:30Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.608719 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m672l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03fe6ac0-1095-4336-a25c-4dd0d6e45053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31287b814c58a714ee9e12931a432d350919edbaae9abdfda56aa0379f08961a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nnfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m672l\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:30Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.631127 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a81d4f-9cb5-40b1-93cf-5691b915a68e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://655f6d42d97749b669cfa44687f71146615027902d52926d59558a788be4553a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb637106bbcec18e199338d1deb9718195ec58a98bd3926955d08e7f77401544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://045bb912902864e6abcf1d71fd0e3f8dc941197ea1e572d2e2e2011b536ba7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be133bdee26d23a224b8da570240c1862aa542c005d8017b2d1817c7c286769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b39195ca74c45e0c948605f1741eef4d655ae6ee34c0fdf6fb1159336e99c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e24
4b0768662bff1ef8a3d0ad1db2df8f9470ec043bbe007542e37d69a3586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2c4c421ad67664f3966e70b69486e8bb132090a43d539283cccbf4f0940b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b398c523488b2b448278d063e640f93da99a6652df77bc9ef3db287d6b9fcf92\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T20:12:29Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 20:12:28.842374 6079 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 20:12:28.842388 6079 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1202 20:12:28.842403 6079 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 20:12:28.842408 6079 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 20:12:28.842420 6079 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 20:12:28.842449 6079 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1202 20:12:28.842468 6079 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1202 20:12:28.842490 6079 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 20:12:28.842486 6079 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 20:12:28.842513 6079 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 20:12:28.842544 6079 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 20:12:28.842593 6079 factory.go:656] Stopping watch factory\\\\nI1202 20:12:28.842614 6079 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1202 20:12:28.842478 6079 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1202 20:12:28.842847 6079 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f94dfdee88013ea62594ac91f02990be3bf0d544fb937f370baccd6720d1f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b286j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:30Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.643945 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p72p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7cf3531-4bca-4b5d-9fa6-e70775605e3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a44d3bef0785796e52961c2f6031af15cf2f142b34f771283ad3f2d8b8abed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47szf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p72p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:30Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.657349 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b634e2ec6fa9c531cec805314972afc064cf0790587894836a857a7c48e66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:30Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.671950 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:30Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.675828 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.675864 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.675878 4796 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.675897 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.675910 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:30Z","lastTransitionTime":"2025-12-02T20:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.693347 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzw77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93502952b6cfea496c2427f936ff031762e3577ca74b4eee4998e3800f768211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d
0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Runnin
g\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzw77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:30Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.779227 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.779524 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.779631 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.779746 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.779857 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:30Z","lastTransitionTime":"2025-12-02T20:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.882280 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.882310 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.882319 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.882332 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.882341 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:30Z","lastTransitionTime":"2025-12-02T20:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.984879 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.984923 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.984932 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.984947 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:30 crc kubenswrapper[4796]: I1202 20:12:30.984955 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:30Z","lastTransitionTime":"2025-12-02T20:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.033453 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.033573 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:12:31 crc kubenswrapper[4796]: E1202 20:12:31.033631 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:12:47.03359383 +0000 UTC m=+50.036969394 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:12:31 crc kubenswrapper[4796]: E1202 20:12:31.033650 4796 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 20:12:31 crc kubenswrapper[4796]: E1202 20:12:31.033702 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 20:12:47.033688982 +0000 UTC m=+50.037064516 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.033722 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:12:31 crc kubenswrapper[4796]: E1202 20:12:31.033800 4796 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 20:12:31 crc kubenswrapper[4796]: E1202 20:12:31.033829 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 20:12:47.033822595 +0000 UTC m=+50.037198119 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.087951 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.088001 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.088011 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.088025 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.088035 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:31Z","lastTransitionTime":"2025-12-02T20:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.134753 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.134816 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:12:31 crc kubenswrapper[4796]: E1202 20:12:31.134959 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 20:12:31 crc kubenswrapper[4796]: E1202 20:12:31.134965 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 20:12:31 crc kubenswrapper[4796]: E1202 20:12:31.135006 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 20:12:31 crc kubenswrapper[4796]: E1202 20:12:31.135020 4796 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 20:12:31 crc kubenswrapper[4796]: E1202 20:12:31.134976 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 20:12:31 crc kubenswrapper[4796]: E1202 20:12:31.135106 4796 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 20:12:31 crc kubenswrapper[4796]: E1202 20:12:31.135068 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 20:12:47.135053106 +0000 UTC m=+50.138428630 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 20:12:31 crc kubenswrapper[4796]: E1202 20:12:31.135192 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 20:12:47.135173429 +0000 UTC m=+50.138548963 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.189955 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.190031 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.190046 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.190061 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.190072 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:31Z","lastTransitionTime":"2025-12-02T20:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.264843 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.264865 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:12:31 crc kubenswrapper[4796]: E1202 20:12:31.265019 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:12:31 crc kubenswrapper[4796]: E1202 20:12:31.265088 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.265406 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:12:31 crc kubenswrapper[4796]: E1202 20:12:31.265482 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.292404 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.292460 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.292469 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.292483 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.292492 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:31Z","lastTransitionTime":"2025-12-02T20:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.394883 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.394922 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.394932 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.394946 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.394955 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:31Z","lastTransitionTime":"2025-12-02T20:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.486904 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b286j_87a81d4f-9cb5-40b1-93cf-5691b915a68e/ovnkube-controller/1.log" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.487539 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b286j_87a81d4f-9cb5-40b1-93cf-5691b915a68e/ovnkube-controller/0.log" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.489713 4796 generic.go:334] "Generic (PLEG): container finished" podID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerID="5a2c4c421ad67664f3966e70b69486e8bb132090a43d539283cccbf4f0940b5e" exitCode=1 Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.489757 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" event={"ID":"87a81d4f-9cb5-40b1-93cf-5691b915a68e","Type":"ContainerDied","Data":"5a2c4c421ad67664f3966e70b69486e8bb132090a43d539283cccbf4f0940b5e"} Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.489798 4796 scope.go:117] "RemoveContainer" containerID="b398c523488b2b448278d063e640f93da99a6652df77bc9ef3db287d6b9fcf92" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.490526 4796 scope.go:117] "RemoveContainer" containerID="5a2c4c421ad67664f3966e70b69486e8bb132090a43d539283cccbf4f0940b5e" Dec 02 20:12:31 crc kubenswrapper[4796]: E1202 20:12:31.490687 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-b286j_openshift-ovn-kubernetes(87a81d4f-9cb5-40b1-93cf-5691b915a68e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.497684 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.498015 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.498024 4796 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.498038 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.498047 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:31Z","lastTransitionTime":"2025-12-02T20:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.512492 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8cae09d-e96b-4d09-b52e-0914be7de1b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c89bf933f9cea43b887da33832efb415751c7e4b2c920514543b1e30a2a584c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fc05d7f51b4dd38d72107bd79f5c2e1ecfd31843e251bbeb816545d3e491a3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/
lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f59840a8a600786748b3a153bf8b1a784a55c3b43ffd837c5deaf1d14d15cbed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://417274ef4639b491835a4ae4d0df778270cdb94ee594d2ff8bfdd86b794ca488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd0e59c5c92c3ed21d1c83f3d929d9294fc86b7d7518bc1ed5091fc7c055b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:31Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.523572 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjq8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64bc04e9-c9fc-4a80-98de-59c88457ace6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9693d017132303f9c7998ef2e45ebe24e58350336b436a84e0a3d1260b8d42d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpzx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:31Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.535502 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5558dc7c-93f9-4212-bf22-fdec743e47ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a71fbcbe900da771fbfbdbe666a0770e052c398e317c196741da6868bac278c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff692b0e7bcf0f9571138c9632ae4c22dddcccc147c6b0e7994e7ff6e13aa08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wzhpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:31Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.552242 4796 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-m672l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03fe6ac0-1095-4336-a25c-4dd0d6e45053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31287b814c58a714ee9e12931a432d350919edbaae9abdfda56aa0379f08961a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nnfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-m672l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:31Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.571995 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a81d4f-9cb5-40b1-93cf-5691b915a68e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://655f6d42d97749b669cfa44687f71146615027902d52926d59558a788be4553a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb637106bbcec18e199338d1deb9718195ec58a98bd3926955d08e7f77401544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://045bb912902864e6abcf1d71fd0e3f8dc941197ea1e572d2e2e2011b536ba7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be133bdee26d23a224b8da570240c1862aa542c005d8017b2d1817c7c286769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b39195ca74c45e0c948605f1741eef4d655ae6ee34c0fdf6fb1159336e99c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e244b0768662bff1ef8a3d0ad1db2df8f9470ec043bbe007542e37d69a3586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2c4c421ad67664f3966e70b69486e8bb132090a43d539283cccbf4f0940b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b398c523488b2b448278d063e640f93da99a6652df77bc9ef3db287d6b9fcf92\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T20:12:29Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 20:12:28.842374 6079 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 20:12:28.842388 6079 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1202 20:12:28.842403 6079 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 20:12:28.842408 6079 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 20:12:28.842420 6079 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 20:12:28.842449 6079 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1202 20:12:28.842468 6079 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1202 20:12:28.842490 6079 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 20:12:28.842486 6079 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 20:12:28.842513 6079 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 20:12:28.842544 6079 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 20:12:28.842593 6079 factory.go:656] Stopping watch factory\\\\nI1202 20:12:28.842614 6079 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1202 20:12:28.842478 6079 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1202 20:12:28.842847 6079 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2c4c421ad67664f3966e70b69486e8bb132090a43d539283cccbf4f0940b5e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T20:12:30Z\\\",\\\"message\\\":\\\"etwork controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:30Z is after 2025-08-24T17:21:41Z]\\\\nI1202 20:12:30.183758 6213 obj_retry.go:409] Going to retry *v1.Pod resource setup for 12 objects: [openshift-network-diagnostics/network-check-target-xd92c openshift-kube-apiserver/kube-apiserver-crc openshift-kube-controller-manager/kube-controller-manager-crc openshift-ovn-kubernetes/ovnkube-node-b286j openshift-image-registry/node-ca-8p72p openshift-machine-config-operator/machine-config-daemon-wzhpq openshift-multus/multus-m672l openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-etcd/etcd-crc openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-multus/multus-additional-cni-plugins-mzw77 openshift-network-operator/iptables-alerter-4ln5h]\\\\nI1202 20:12:30.183771 6213 lb_config.go:1031] Cluster endpoints for openshift-dns-operator/metrics for network=default are: map[]\\\\nI1202 20:12:30.183781 6213 services_controller.go:443] Built service openshift-dns-operator/metrics LB 
cluster-wi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f94dfdee88013ea62594ac91f02990be3bf0d544fb937f370baccd6720d1f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47
ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b286j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:31Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.581516 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p72p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7cf3531-4bca-4b5d-9fa6-e70775605e3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a44d3bef0785796e52961c2f6031af15cf2f142b34f771283ad3f2d8b8abed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47szf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p72p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:31Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.594767 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b634e2ec6fa9c531cec805314972afc064cf0790587894836a857a7c48e66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:31Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.600817 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.600864 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.600885 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.600906 4796 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.600922 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:31Z","lastTransitionTime":"2025-12-02T20:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.606047 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:31Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.618286 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:31Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.631579 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:31Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.644924 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzw77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93502952b6cfea496c2427f936ff031762e3577ca74b4eee4998e3800f768211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.1
26.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzw77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:31Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.656696 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64bd216ef667b40cff69e55295ff7ac4d61f8e893dc1f24f12f42aed5463417b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:31Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.670121 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"423e1ef0-d2b1-442c-8751-372d2de26a00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 20:12:09.715158 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 20:12:09.723356 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1998965949/tls.crt::/tmp/serving-cert-1998965949/tls.key\\\\\\\"\\\\nI1202 20:12:15.056297 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 20:12:15.060468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 20:12:15.060506 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 20:12:15.060526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 20:12:15.060532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 20:12:15.077181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1202 20:12:15.077200 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 20:12:15.077201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077211 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077216 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 20:12:15.077220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 20:12:15.077223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 20:12:15.077226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 20:12:15.079033 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:31Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.687550 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9a3760c-5324-4e7c-ba6d-947d4200fd7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca809eaaaad32548ccb944ec8a40abe7dc6349d1e4b1640e53d5cd1aba9e8798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3931263dfbba517a3f109c2fec313fdb38b7232f4384fbf922d2de2f40d95840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://567f06ef173b505801a2b909e2ffd85973c697cbd3d89c10a5b118247c58c193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bce1cb4a264ff1c25ca77f3e4c9aa251e20002d987f4e1aa61fce2c99799f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:31Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.703449 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:31Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.703852 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.703884 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.703892 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.703907 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.703916 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:31Z","lastTransitionTime":"2025-12-02T20:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.806835 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.806871 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.806882 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.806900 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.806911 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:31Z","lastTransitionTime":"2025-12-02T20:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.909923 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.910224 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.910331 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.910423 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.910513 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:31Z","lastTransitionTime":"2025-12-02T20:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.983497 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r7pwk"] Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.984268 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r7pwk" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.987117 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 02 20:12:31 crc kubenswrapper[4796]: I1202 20:12:31.987162 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.004865 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"423e1ef0-d2b1-442c-8751-372d2de26a00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 20:12:09.715158 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 20:12:09.723356 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1998965949/tls.crt::/tmp/serving-cert-1998965949/tls.key\\\\\\\"\\\\nI1202 20:12:15.056297 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 20:12:15.060468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 20:12:15.060506 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 20:12:15.060526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 20:12:15.060532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 20:12:15.077181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1202 20:12:15.077200 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 20:12:15.077201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077211 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077216 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 20:12:15.077220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 20:12:15.077223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 20:12:15.077226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 20:12:15.079033 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:32Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.014976 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.015044 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.015064 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.015096 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.015119 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:32Z","lastTransitionTime":"2025-12-02T20:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.020040 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9a3760c-5324-4e7c-ba6d-947d4200fd7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca809eaaaad32548ccb944ec8a40abe7dc6349d1e4b1640e53d5cd1aba9e8798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3931263dfbba517a3f109c2fec313fdb38b7232f4384fbf922d2de2f40d95840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://567f06ef173b505801a2b909e2ffd85973c697cbd3d89c10a5b118247c58c193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bce1cb4a264ff1c25ca77f3e4c9aa251e20002d987f4e1aa61fce2c99799f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:32Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.036118 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:32Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.043372 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8qs8\" (UniqueName: \"kubernetes.io/projected/d985789d-ae41-4ae1-938a-8f60820a303c-kube-api-access-h8qs8\") pod \"ovnkube-control-plane-749d76644c-r7pwk\" (UID: \"d985789d-ae41-4ae1-938a-8f60820a303c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r7pwk" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.043469 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d985789d-ae41-4ae1-938a-8f60820a303c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-r7pwk\" (UID: \"d985789d-ae41-4ae1-938a-8f60820a303c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r7pwk" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.043530 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d985789d-ae41-4ae1-938a-8f60820a303c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-r7pwk\" (UID: \"d985789d-ae41-4ae1-938a-8f60820a303c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r7pwk" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.043564 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d985789d-ae41-4ae1-938a-8f60820a303c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-r7pwk\" (UID: \"d985789d-ae41-4ae1-938a-8f60820a303c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r7pwk" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.052995 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64bd216ef667b40cff69e55295ff7ac4d61f8e893dc1f24f12f42aed5463417b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:32Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.079941 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8cae09d-e96b-4d09-b52e-0914be7de1b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c89bf933f9cea43b887da33832efb415751c7e4b2c920514543b1e30a2a584c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fc05d7f51b4dd38d72107bd79f5c2e1ecfd31843e251bbeb816545d3e491a3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f59840a8a600786748b3a153bf8b1a784a55c3b43ffd837c5deaf1d14d15cbed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://417274ef4639b491835a4ae4d0df778270cdb94
ee594d2ff8bfdd86b794ca488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd0e59c5c92c3ed21d1c83f3d929d9294fc86b7d7518bc1ed5091fc7c055b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:32Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.092820 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64bc04e9-c9fc-4a80-98de-59c88457ace6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9693d017132303f9c7998ef2e45ebe24e58350336b436a84e0a3d1260b8d42d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpzx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:32Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.109517 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5558dc7c-93f9-4212-bf22-fdec743e47ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a71fbcbe900da771fbfbdbe666a0770e052c398e317c196741da6868bac278c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff692b0e7bcf0f9571138c9632ae4c22dddcccc147c6b0e7994e7ff6e13aa08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp
68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wzhpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:32Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.118243 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.118299 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.118311 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.118330 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.118343 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:32Z","lastTransitionTime":"2025-12-02T20:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.125789 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b634e2ec6fa9c531cec805314972afc064cf0790587894836a857a7c48e66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:32Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.140821 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:32Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.144018 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d985789d-ae41-4ae1-938a-8f60820a303c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-r7pwk\" (UID: \"d985789d-ae41-4ae1-938a-8f60820a303c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r7pwk" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.144166 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d985789d-ae41-4ae1-938a-8f60820a303c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-r7pwk\" (UID: \"d985789d-ae41-4ae1-938a-8f60820a303c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r7pwk" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.144222 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d985789d-ae41-4ae1-938a-8f60820a303c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-r7pwk\" (UID: \"d985789d-ae41-4ae1-938a-8f60820a303c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r7pwk" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.144376 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8qs8\" (UniqueName: \"kubernetes.io/projected/d985789d-ae41-4ae1-938a-8f60820a303c-kube-api-access-h8qs8\") pod \"ovnkube-control-plane-749d76644c-r7pwk\" (UID: \"d985789d-ae41-4ae1-938a-8f60820a303c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r7pwk" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.145202 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d985789d-ae41-4ae1-938a-8f60820a303c-ovnkube-config\") 
pod \"ovnkube-control-plane-749d76644c-r7pwk\" (UID: \"d985789d-ae41-4ae1-938a-8f60820a303c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r7pwk" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.145209 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d985789d-ae41-4ae1-938a-8f60820a303c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-r7pwk\" (UID: \"d985789d-ae41-4ae1-938a-8f60820a303c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r7pwk" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.153762 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d985789d-ae41-4ae1-938a-8f60820a303c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-r7pwk\" (UID: \"d985789d-ae41-4ae1-938a-8f60820a303c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r7pwk" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.156400 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:32Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.169560 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8qs8\" (UniqueName: \"kubernetes.io/projected/d985789d-ae41-4ae1-938a-8f60820a303c-kube-api-access-h8qs8\") pod \"ovnkube-control-plane-749d76644c-r7pwk\" (UID: \"d985789d-ae41-4ae1-938a-8f60820a303c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r7pwk" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.175095 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m672l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03fe6ac0-1095-4336-a25c-4dd0d6e45053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31287b814c58a714ee9e12931a432d350919edbaae9abdfda56aa0379f08961a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nnfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m672l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:32Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.209233 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a81d4f-9cb5-40b1-93cf-5691b915a68e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://655f6d42d97749b669cfa44687f71146615027902d52926d59558a788be4553a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb637106bbcec18e199338d1deb9718195ec58a98bd3926955d08e7f77401544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://045bb912902864e6abcf1d71fd0e3f8dc941197ea1e572d2e2e2011b536ba7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be133bdee26d23a224b8da570240c1862aa542c005d8017b2d1817c7c286769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b39195ca74c45e0c948605f1741eef4d655ae6ee34c0fdf6fb1159336e99c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e244b0768662bff1ef8a3d0ad1db2df8f9470ec043bbe007542e37d69a3586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2c4c421ad67664f3966e70b69486e8bb132090
a43d539283cccbf4f0940b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b398c523488b2b448278d063e640f93da99a6652df77bc9ef3db287d6b9fcf92\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T20:12:29Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 20:12:28.842374 6079 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 20:12:28.842388 6079 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1202 20:12:28.842403 6079 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 20:12:28.842408 6079 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 20:12:28.842420 6079 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 20:12:28.842449 6079 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1202 20:12:28.842468 6079 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1202 20:12:28.842490 6079 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 20:12:28.842486 6079 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 20:12:28.842513 6079 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 20:12:28.842544 6079 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 20:12:28.842593 6079 factory.go:656] Stopping watch factory\\\\nI1202 20:12:28.842614 6079 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1202 20:12:28.842478 6079 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1202 20:12:28.842847 6079 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2c4c421ad67664f3966e70b69486e8bb132090a43d539283cccbf4f0940b5e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T20:12:30Z\\\",\\\"message\\\":\\\"etwork controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:30Z is after 2025-08-24T17:21:41Z]\\\\nI1202 20:12:30.183758 6213 obj_retry.go:409] Going to retry *v1.Pod resource setup for 12 objects: [openshift-network-diagnostics/network-check-target-xd92c openshift-kube-apiserver/kube-apiserver-crc openshift-kube-controller-manager/kube-controller-manager-crc openshift-ovn-kubernetes/ovnkube-node-b286j openshift-image-registry/node-ca-8p72p openshift-machine-config-operator/machine-config-daemon-wzhpq openshift-multus/multus-m672l openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-etcd/etcd-crc openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-multus/multus-additional-cni-plugins-mzw77 openshift-network-operator/iptables-alerter-4ln5h]\\\\nI1202 20:12:30.183771 6213 lb_config.go:1031] Cluster endpoints for 
openshift-dns-operator/metrics for network=default are: map[]\\\\nI1202 20:12:30.183781 6213 services_controller.go:443] Built service openshift-dns-operator/metrics LB cluster-wi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f94dfdee88013ea62594ac91f02990be3bf0d544fb937f370baccd6720d1f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID
\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b286j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:32Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.221500 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.221548 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.221559 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.221578 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.221590 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:32Z","lastTransitionTime":"2025-12-02T20:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.225049 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p72p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7cf3531-4bca-4b5d-9fa6-e70775605e3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a44d3bef0785796e52961c2f6031af15cf2f142b34f771283ad3f2d8b8abed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47szf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p72p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:32Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.242103 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:32Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.287065 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzw77" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93502952b6cfea496c2427f936ff031762e3577ca74b4eee4998e3800f768211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzw77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:32Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.296776 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r7pwk" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.310132 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r7pwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985789d-ae41-4ae1-938a-8f60820a303c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8qs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8qs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r7pwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:32Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:32 crc kubenswrapper[4796]: W1202 20:12:32.323982 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd985789d_ae41_4ae1_938a_8f60820a303c.slice/crio-022f556902d2d944676abfdaf966865ac825912047db3f2ec3c0ac20f9f9de21 WatchSource:0}: Error finding container 022f556902d2d944676abfdaf966865ac825912047db3f2ec3c0ac20f9f9de21: Status 404 returned error can't find the container with id 022f556902d2d944676abfdaf966865ac825912047db3f2ec3c0ac20f9f9de21 Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.325519 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.325554 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.325567 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.325585 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.325597 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:32Z","lastTransitionTime":"2025-12-02T20:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.429125 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.429173 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.429182 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.429200 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.429214 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:32Z","lastTransitionTime":"2025-12-02T20:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.494189 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r7pwk" event={"ID":"d985789d-ae41-4ae1-938a-8f60820a303c","Type":"ContainerStarted","Data":"022f556902d2d944676abfdaf966865ac825912047db3f2ec3c0ac20f9f9de21"} Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.496302 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b286j_87a81d4f-9cb5-40b1-93cf-5691b915a68e/ovnkube-controller/1.log" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.531664 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.531725 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.531740 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.531764 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.531780 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:32Z","lastTransitionTime":"2025-12-02T20:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.634061 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.634125 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.634143 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.634169 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.634191 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:32Z","lastTransitionTime":"2025-12-02T20:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.737814 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.737856 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.737865 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.737883 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.737905 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:32Z","lastTransitionTime":"2025-12-02T20:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.840627 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.840676 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.840685 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.840706 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.840717 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:32Z","lastTransitionTime":"2025-12-02T20:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.948381 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.949202 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.949300 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.949332 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:32 crc kubenswrapper[4796]: I1202 20:12:32.949352 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:32Z","lastTransitionTime":"2025-12-02T20:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.052734 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.052817 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.052844 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.052884 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.052909 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:33Z","lastTransitionTime":"2025-12-02T20:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.155503 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.155556 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.155569 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.155587 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.155601 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:33Z","lastTransitionTime":"2025-12-02T20:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.258962 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.259018 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.259036 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.259063 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.259084 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:33Z","lastTransitionTime":"2025-12-02T20:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.264544 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.264610 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:12:33 crc kubenswrapper[4796]: E1202 20:12:33.264712 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.264786 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:12:33 crc kubenswrapper[4796]: E1202 20:12:33.264878 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:12:33 crc kubenswrapper[4796]: E1202 20:12:33.265161 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.361815 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.361890 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.361906 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.361925 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.361937 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:33Z","lastTransitionTime":"2025-12-02T20:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.464551 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.464629 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.464646 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.464681 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.464710 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:33Z","lastTransitionTime":"2025-12-02T20:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.505351 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-g7nb5"] Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.506180 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:12:33 crc kubenswrapper[4796]: E1202 20:12:33.506350 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.507136 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r7pwk" event={"ID":"d985789d-ae41-4ae1-938a-8f60820a303c","Type":"ContainerStarted","Data":"735f79154356ab35adf831f1f36fd0956463bf969e88e69508401c8976d24917"} Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.507195 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r7pwk" event={"ID":"d985789d-ae41-4ae1-938a-8f60820a303c","Type":"ContainerStarted","Data":"4176f1cf86fa360a7a980992974166a8ea22b7f6f0ba8b28539f3e411c9511eb"} Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.527009 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b634e2ec6fa9c531cec805314972afc064cf0790587894836a857a7c48e66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:33Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.548463 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:33Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.558921 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8g97\" (UniqueName: \"kubernetes.io/projected/60c1710d-bf66-4687-8ee7-ea828cde5d53-kube-api-access-s8g97\") pod \"network-metrics-daemon-g7nb5\" (UID: \"60c1710d-bf66-4687-8ee7-ea828cde5d53\") " pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.559008 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60c1710d-bf66-4687-8ee7-ea828cde5d53-metrics-certs\") pod \"network-metrics-daemon-g7nb5\" (UID: \"60c1710d-bf66-4687-8ee7-ea828cde5d53\") " pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.564761 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:33Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.567317 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.567356 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.567370 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.567399 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.567412 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:33Z","lastTransitionTime":"2025-12-02T20:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.584348 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m672l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03fe6ac0-1095-4336-a25c-4dd0d6e45053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31287b814c58a714ee9e12931a432d350919edbaae9abdfda56aa0379f08961a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nnfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m672l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:33Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.616288 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a81d4f-9cb5-40b1-93cf-5691b915a68e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://655f6d42d97749b669cfa44687f71146615027902d52926d59558a788be4553a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb637106bbcec18e199338d1deb9718195ec58a98bd3926955d08e7f77401544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://045bb912902864e6abcf1d71fd0e3f8dc941197ea1e572d2e2e2011b536ba7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be133bdee26d23a224b8da570240c1862aa542c005d8017b2d1817c7c286769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b39195ca74c45e0c948605f1741eef4d655ae6ee34c0fdf6fb1159336e99c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e244b0768662bff1ef8a3d0ad1db2df8f9470ec043bbe007542e37d69a3586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2c4c421ad67664f3966e70b69486e8bb132090a43d539283cccbf4f0940b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b398c523488b2b448278d063e640f93da99a6652df77bc9ef3db287d6b9fcf92\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T20:12:29Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 20:12:28.842374 6079 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 20:12:28.842388 6079 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1202 20:12:28.842403 6079 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 20:12:28.842408 6079 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 20:12:28.842420 6079 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 20:12:28.842449 6079 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1202 20:12:28.842468 6079 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1202 20:12:28.842490 6079 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 20:12:28.842486 6079 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 20:12:28.842513 6079 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 20:12:28.842544 6079 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 20:12:28.842593 6079 factory.go:656] Stopping watch factory\\\\nI1202 20:12:28.842614 6079 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1202 20:12:28.842478 6079 handler.go:208] Removed *v1.EgressIP 
event handler 8\\\\nI1202 20:12:28.842847 6079 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2c4c421ad67664f3966e70b69486e8bb132090a43d539283cccbf4f0940b5e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T20:12:30Z\\\",\\\"message\\\":\\\"etwork controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:30Z is after 2025-08-24T17:21:41Z]\\\\nI1202 20:12:30.183758 6213 obj_retry.go:409] Going to retry *v1.Pod resource setup for 12 objects: [openshift-network-diagnostics/network-check-target-xd92c openshift-kube-apiserver/kube-apiserver-crc openshift-kube-controller-manager/kube-controller-manager-crc openshift-ovn-kubernetes/ovnkube-node-b286j openshift-image-registry/node-ca-8p72p openshift-machine-config-operator/machine-config-daemon-wzhpq openshift-multus/multus-m672l openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-etcd/etcd-crc openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-multus/multus-additional-cni-plugins-mzw77 openshift-network-operator/iptables-alerter-4ln5h]\\\\nI1202 20:12:30.183771 6213 lb_config.go:1031] Cluster endpoints for openshift-dns-operator/metrics for network=default are: map[]\\\\nI1202 20:12:30.183781 6213 services_controller.go:443] Built service openshift-dns-operator/metrics LB 
cluster-wi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f94dfdee88013ea62594ac91f02990be3bf0d544fb937f370baccd6720d1f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47
ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b286j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:33Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.628776 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p72p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7cf3531-4bca-4b5d-9fa6-e70775605e3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a44d3bef0785796e52961c2f6031af15cf2f142b34f771283ad3f2d8b8abed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47szf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p72p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:33Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.641817 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:33Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.656034 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzw77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93502952b6cfea496c2427f936ff031762e3577ca74b4eee4998e3800f768211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"co
ntainerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzw77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:33Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.660500 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8g97\" (UniqueName: \"kubernetes.io/projected/60c1710d-bf66-4687-8ee7-ea828cde5d53-kube-api-access-s8g97\") pod \"network-metrics-daemon-g7nb5\" (UID: \"60c1710d-bf66-4687-8ee7-ea828cde5d53\") " pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.660601 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60c1710d-bf66-4687-8ee7-ea828cde5d53-metrics-certs\") pod \"network-metrics-daemon-g7nb5\" (UID: \"60c1710d-bf66-4687-8ee7-ea828cde5d53\") " pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:12:33 crc kubenswrapper[4796]: E1202 20:12:33.660816 4796 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 20:12:33 crc kubenswrapper[4796]: E1202 20:12:33.660909 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60c1710d-bf66-4687-8ee7-ea828cde5d53-metrics-certs podName:60c1710d-bf66-4687-8ee7-ea828cde5d53 nodeName:}" failed. No retries permitted until 2025-12-02 20:12:34.160881445 +0000 UTC m=+37.164257019 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60c1710d-bf66-4687-8ee7-ea828cde5d53-metrics-certs") pod "network-metrics-daemon-g7nb5" (UID: "60c1710d-bf66-4687-8ee7-ea828cde5d53") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.671899 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.671970 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.671982 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.672004 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.672424 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:33Z","lastTransitionTime":"2025-12-02T20:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.674216 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r7pwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985789d-ae41-4ae1-938a-8f60820a303c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8qs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8qs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r7pwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-12-02T20:12:33Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.681244 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8g97\" (UniqueName: \"kubernetes.io/projected/60c1710d-bf66-4687-8ee7-ea828cde5d53-kube-api-access-s8g97\") pod \"network-metrics-daemon-g7nb5\" (UID: \"60c1710d-bf66-4687-8ee7-ea828cde5d53\") " pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.691298 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423e1ef0-d2b1-442c-8751-372d2de26a00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f3
6dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 20:12:09.715158 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 20:12:09.723356 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1998965949/tls.crt::/tmp/serving-cert-1998965949/tls.key\\\\\\\"\\\\nI1202 20:12:15.056297 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 20:12:15.060468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 20:12:15.060506 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 20:12:15.060526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 20:12:15.060532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 20:12:15.077181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1202 20:12:15.077200 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 20:12:15.077201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077211 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077216 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 20:12:15.077220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 20:12:15.077223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 20:12:15.077226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 20:12:15.079033 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:33Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.704308 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9a3760c-5324-4e7c-ba6d-947d4200fd7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca809eaaaad32548ccb944ec8a40abe7dc6349d1e4b1640e53d5cd1aba9e8798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3931263dfbba517a3f109c2fec313fdb38b7232f4384fbf922d2de2f40d95840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://567f06ef173b505801a2b909e2ffd85973c697cbd3d89c10a5b118247c58c193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bce1cb4a264ff1c25ca77f3e4c9aa251e20002d987f4e1aa61fce2c99799f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:33Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.719134 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:33Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.732031 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64bd216ef667b40cff69e55295ff7ac4d61f8e893dc1f24f12f42aed5463417b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:33Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.746553 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g7nb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c1710d-bf66-4687-8ee7-ea828cde5d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8g97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8g97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g7nb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:33Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.772772 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8cae09d-e96b-4d09-b52e-0914be7de1b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c89bf933f9cea43b887da33832efb415751c7e4b2c920514543b1e30a2a584c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fc05d7f51b4dd38d72107bd79f5c2e1ecfd31843e251bbeb816545d3e491a3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f59840a8a600786748b3a153bf8b1a784a55c3b43ffd837c5deaf1d14d15cbed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://417274ef4639b491835a4ae4d0df778270cdb94
ee594d2ff8bfdd86b794ca488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd0e59c5c92c3ed21d1c83f3d929d9294fc86b7d7518bc1ed5091fc7c055b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:33Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.774952 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.775007 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.775023 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.775050 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.775068 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:33Z","lastTransitionTime":"2025-12-02T20:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.785334 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64bc04e9-c9fc-4a80-98de-59c88457ace6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9693d017132303f9c7998ef2e45ebe24e58350336b436a84e0a3d1260b8d42d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpzx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:33Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.799092 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5558dc7c-93f9-4212-bf22-fdec743e47ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a71fbcbe900da771fbfbdbe666a0770e052c398e317c196741da6868bac278c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff692b0e7bcf0f9571138c9632ae4c22dddcccc147c6b0e7994e7ff6e13aa08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wzhpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:33Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.820656 4796 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8cae09d-e96b-4d09-b52e-0914be7de1b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c89bf933f9cea43b887da33832efb415751c7e4b2c920514543b1e30a2a584c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fc05d7f51b4dd38d72107bd79f5c2e1ecfd31843e251bbeb816545d3e491a3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f59840a8a600786748b3a153bf8b1a784a55c3b43ffd837c5deaf1d14d15cbed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://417274ef4639b491835a4ae4d0df778270cdb94ee594d2ff8bfdd86b794ca488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd0e59c5c92c3ed21d1c83f3d929d9294fc86b7d7518bc1ed5091fc7c055b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:33Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.834502 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjq8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64bc04e9-c9fc-4a80-98de-59c88457ace6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9693d017132303f9c7998ef2e45ebe24e58350336b436a84e0a3d1260b8d42d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpzx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:33Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.852110 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5558dc7c-93f9-4212-bf22-fdec743e47ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a71fbcbe900da771fbfbdbe666a0770e052c398e317c196741da6868bac278c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff692b0e7bcf0f9571138c9632ae4c22dddcccc147c6b0e7994e7ff6e13aa08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wzhpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:33Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.866295 4796 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b634e2ec6fa9c531cec805314972afc064cf0790587894836a857a7c48e66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:33Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.878340 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.878417 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.878376 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:33Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.878434 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.878611 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.878643 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:33Z","lastTransitionTime":"2025-12-02T20:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.893375 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:33Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.913120 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m672l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03fe6ac0-1095-4336-a25c-4dd0d6e45053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31287b814c58a714ee9e12931a432d350919edbaae9abdfda56aa0379f08961a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nnfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m672l\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:33Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.941833 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a81d4f-9cb5-40b1-93cf-5691b915a68e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://655f6d42d97749b669cfa44687f71146615027902d52926d59558a788be4553a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb637106bbcec18e199338d1deb9718195ec58a98bd3926955d08e7f77401544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://045bb912902864e6abcf1d71fd0e3f8dc941197ea1e572d2e2e2011b536ba7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be133bdee26d23a224b8da570240c1862aa542c005d8017b2d1817c7c286769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b39195ca74c45e0c948605f1741eef4d655ae6ee34c0fdf6fb1159336e99c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e24
4b0768662bff1ef8a3d0ad1db2df8f9470ec043bbe007542e37d69a3586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2c4c421ad67664f3966e70b69486e8bb132090a43d539283cccbf4f0940b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b398c523488b2b448278d063e640f93da99a6652df77bc9ef3db287d6b9fcf92\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T20:12:29Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 20:12:28.842374 6079 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 20:12:28.842388 6079 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1202 20:12:28.842403 6079 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 20:12:28.842408 6079 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 20:12:28.842420 6079 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 20:12:28.842449 6079 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1202 20:12:28.842468 6079 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1202 20:12:28.842490 6079 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 20:12:28.842486 6079 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 20:12:28.842513 6079 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 20:12:28.842544 6079 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 20:12:28.842593 6079 factory.go:656] Stopping watch factory\\\\nI1202 20:12:28.842614 6079 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1202 20:12:28.842478 6079 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1202 20:12:28.842847 6079 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2c4c421ad67664f3966e70b69486e8bb132090a43d539283cccbf4f0940b5e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T20:12:30Z\\\",\\\"message\\\":\\\"etwork controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:30Z is after 2025-08-24T17:21:41Z]\\\\nI1202 20:12:30.183758 6213 obj_retry.go:409] Going to retry *v1.Pod resource setup for 12 objects: [openshift-network-diagnostics/network-check-target-xd92c openshift-kube-apiserver/kube-apiserver-crc openshift-kube-controller-manager/kube-controller-manager-crc openshift-ovn-kubernetes/ovnkube-node-b286j openshift-image-registry/node-ca-8p72p openshift-machine-config-operator/machine-config-daemon-wzhpq openshift-multus/multus-m672l openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-etcd/etcd-crc openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-multus/multus-additional-cni-plugins-mzw77 openshift-network-operator/iptables-alerter-4ln5h]\\\\nI1202 20:12:30.183771 6213 lb_config.go:1031] Cluster endpoints for openshift-dns-operator/metrics for network=default are: map[]\\\\nI1202 20:12:30.183781 6213 services_controller.go:443] Built service openshift-dns-operator/metrics LB 
cluster-wi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f94dfdee88013ea62594ac91f02990be3bf0d544fb937f370baccd6720d1f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47
ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b286j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:33Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.955033 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p72p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7cf3531-4bca-4b5d-9fa6-e70775605e3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a44d3bef0785796e52961c2f6031af15cf2f142b34f771283ad3f2d8b8abed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47szf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p72p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:33Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.974225 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:33Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.980612 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.980666 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.980683 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.980704 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.980717 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:33Z","lastTransitionTime":"2025-12-02T20:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:33 crc kubenswrapper[4796]: I1202 20:12:33.994607 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzw77" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93502952b6cfea496c2427f936ff031762e3577ca74b4eee4998e3800f768211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzw77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:33Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.010357 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r7pwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985789d-ae41-4ae1-938a-8f60820a303c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4176f1cf86fa360a7a980992974166a8ea22b7f6f0ba8b28539f3e411c9511eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8qs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://735f79154356ab35adf831f1f36fd0956463bf969e88e69508401c8976d24917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8qs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r7pwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:34Z is after 2025-08-24T17:21:41Z" Dec 02 
20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.028459 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423e1ef0-d2b1-442c-8751-372d2de26a00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 20:12:09.715158 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 20:12:09.723356 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1998965949/tls.crt::/tmp/serving-cert-1998965949/tls.key\\\\\\\"\\\\nI1202 20:12:15.056297 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 20:12:15.060468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 20:12:15.060506 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 20:12:15.060526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 20:12:15.060532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 20:12:15.077181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1202 20:12:15.077200 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 20:12:15.077201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077211 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077216 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 20:12:15.077220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 20:12:15.077223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 20:12:15.077226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 20:12:15.079033 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:34Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.042015 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9a3760c-5324-4e7c-ba6d-947d4200fd7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca809eaaaad32548ccb944ec8a40abe7dc6349d1e4b1640e53d5cd1aba9e8798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3931263dfbba517a3f109c2fec313fdb38b7232f4384fbf922d2de2f40d95840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://567f06ef173b505801a2b909e2ffd85973c697cbd3d89c10a5b118247c58c193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bce1cb4a264ff1c25ca77f3e4c9aa251e20002d987f4e1aa61fce2c99799f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:34Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.058489 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:34Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.071921 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64bd216ef667b40cff69e55295ff7ac4d61f8e893dc1f24f12f42aed5463417b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:34Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.083755 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.083810 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.083826 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.083845 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.083858 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:34Z","lastTransitionTime":"2025-12-02T20:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.086632 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g7nb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c1710d-bf66-4687-8ee7-ea828cde5d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8g97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8g97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g7nb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:34Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.165070 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60c1710d-bf66-4687-8ee7-ea828cde5d53-metrics-certs\") pod \"network-metrics-daemon-g7nb5\" (UID: \"60c1710d-bf66-4687-8ee7-ea828cde5d53\") " pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:12:34 crc kubenswrapper[4796]: E1202 20:12:34.165283 4796 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 20:12:34 crc kubenswrapper[4796]: E1202 20:12:34.165364 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60c1710d-bf66-4687-8ee7-ea828cde5d53-metrics-certs podName:60c1710d-bf66-4687-8ee7-ea828cde5d53 nodeName:}" failed. No retries permitted until 2025-12-02 20:12:35.165345395 +0000 UTC m=+38.168720939 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60c1710d-bf66-4687-8ee7-ea828cde5d53-metrics-certs") pod "network-metrics-daemon-g7nb5" (UID: "60c1710d-bf66-4687-8ee7-ea828cde5d53") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.185874 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.185906 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.185915 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.185930 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.185942 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:34Z","lastTransitionTime":"2025-12-02T20:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.288458 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.288489 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.288498 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.288510 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.288519 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:34Z","lastTransitionTime":"2025-12-02T20:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.391887 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.391957 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.391976 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.392002 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.392023 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:34Z","lastTransitionTime":"2025-12-02T20:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.495188 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.495242 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.495275 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.495300 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.495318 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:34Z","lastTransitionTime":"2025-12-02T20:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.597907 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.597947 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.597956 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.597970 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.597979 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:34Z","lastTransitionTime":"2025-12-02T20:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.700494 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.700543 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.700557 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.700576 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.700589 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:34Z","lastTransitionTime":"2025-12-02T20:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.802684 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.802723 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.802733 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.802747 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.802755 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:34Z","lastTransitionTime":"2025-12-02T20:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.905283 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.905321 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.905330 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.905343 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:34 crc kubenswrapper[4796]: I1202 20:12:34.905352 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:34Z","lastTransitionTime":"2025-12-02T20:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.007394 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.007436 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.007445 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.007459 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.007471 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:35Z","lastTransitionTime":"2025-12-02T20:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.110169 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.110279 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.110299 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.110318 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.110328 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:35Z","lastTransitionTime":"2025-12-02T20:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.175042 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60c1710d-bf66-4687-8ee7-ea828cde5d53-metrics-certs\") pod \"network-metrics-daemon-g7nb5\" (UID: \"60c1710d-bf66-4687-8ee7-ea828cde5d53\") " pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:12:35 crc kubenswrapper[4796]: E1202 20:12:35.175166 4796 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 20:12:35 crc kubenswrapper[4796]: E1202 20:12:35.175230 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60c1710d-bf66-4687-8ee7-ea828cde5d53-metrics-certs podName:60c1710d-bf66-4687-8ee7-ea828cde5d53 nodeName:}" failed. No retries permitted until 2025-12-02 20:12:37.175209659 +0000 UTC m=+40.178585193 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60c1710d-bf66-4687-8ee7-ea828cde5d53-metrics-certs") pod "network-metrics-daemon-g7nb5" (UID: "60c1710d-bf66-4687-8ee7-ea828cde5d53") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.212729 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.212774 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.212786 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.212802 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.212813 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:35Z","lastTransitionTime":"2025-12-02T20:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.264482 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:12:35 crc kubenswrapper[4796]: E1202 20:12:35.264663 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.264719 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:12:35 crc kubenswrapper[4796]: E1202 20:12:35.264960 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.265018 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:12:35 crc kubenswrapper[4796]: E1202 20:12:35.265072 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.265160 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:12:35 crc kubenswrapper[4796]: E1202 20:12:35.265206 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.315570 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.315612 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.315623 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.315639 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.315648 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:35Z","lastTransitionTime":"2025-12-02T20:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.418405 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.418455 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.418467 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.418488 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.418502 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:35Z","lastTransitionTime":"2025-12-02T20:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.522445 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.522507 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.522523 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.522549 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.522566 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:35Z","lastTransitionTime":"2025-12-02T20:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.626035 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.626089 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.626102 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.626122 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.626136 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:35Z","lastTransitionTime":"2025-12-02T20:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.729132 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.729194 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.729208 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.729228 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.729244 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:35Z","lastTransitionTime":"2025-12-02T20:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.832370 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.832444 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.832459 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.832491 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.832506 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:35Z","lastTransitionTime":"2025-12-02T20:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.934627 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.934705 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.934722 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.934810 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:35 crc kubenswrapper[4796]: I1202 20:12:35.934836 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:35Z","lastTransitionTime":"2025-12-02T20:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.037023 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.037068 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.037081 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.037096 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.037107 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:36Z","lastTransitionTime":"2025-12-02T20:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.139957 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.140032 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.140047 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.140073 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.140091 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:36Z","lastTransitionTime":"2025-12-02T20:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.242484 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.242555 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.242568 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.242594 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.242609 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:36Z","lastTransitionTime":"2025-12-02T20:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.345769 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.345810 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.345818 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.345832 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.345843 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:36Z","lastTransitionTime":"2025-12-02T20:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.448284 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.448363 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.448401 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.448422 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.448434 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:36Z","lastTransitionTime":"2025-12-02T20:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.551686 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.551757 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.551778 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.551806 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.551824 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:36Z","lastTransitionTime":"2025-12-02T20:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.655522 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.655596 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.655616 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.655641 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.655659 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:36Z","lastTransitionTime":"2025-12-02T20:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.758564 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.758632 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.758650 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.758674 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.758692 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:36Z","lastTransitionTime":"2025-12-02T20:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.861949 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.862046 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.862069 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.862098 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.862126 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:36Z","lastTransitionTime":"2025-12-02T20:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.964852 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.964903 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.964917 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.964940 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:36 crc kubenswrapper[4796]: I1202 20:12:36.964956 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:36Z","lastTransitionTime":"2025-12-02T20:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.066848 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.066881 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.066890 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.066905 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.066914 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:37Z","lastTransitionTime":"2025-12-02T20:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.168914 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.168957 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.168967 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.168983 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.168995 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:37Z","lastTransitionTime":"2025-12-02T20:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.198891 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60c1710d-bf66-4687-8ee7-ea828cde5d53-metrics-certs\") pod \"network-metrics-daemon-g7nb5\" (UID: \"60c1710d-bf66-4687-8ee7-ea828cde5d53\") " pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:12:37 crc kubenswrapper[4796]: E1202 20:12:37.199063 4796 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 20:12:37 crc kubenswrapper[4796]: E1202 20:12:37.199127 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60c1710d-bf66-4687-8ee7-ea828cde5d53-metrics-certs podName:60c1710d-bf66-4687-8ee7-ea828cde5d53 nodeName:}" failed. No retries permitted until 2025-12-02 20:12:41.19910783 +0000 UTC m=+44.202483374 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60c1710d-bf66-4687-8ee7-ea828cde5d53-metrics-certs") pod "network-metrics-daemon-g7nb5" (UID: "60c1710d-bf66-4687-8ee7-ea828cde5d53") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.264548 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.264548 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.264686 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:12:37 crc kubenswrapper[4796]: E1202 20:12:37.264815 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.264833 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:12:37 crc kubenswrapper[4796]: E1202 20:12:37.264914 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:12:37 crc kubenswrapper[4796]: E1202 20:12:37.264990 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:12:37 crc kubenswrapper[4796]: E1202 20:12:37.265067 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.270986 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.271083 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.271104 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.271144 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.271167 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:37Z","lastTransitionTime":"2025-12-02T20:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.277846 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g7nb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c1710d-bf66-4687-8ee7-ea828cde5d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8g97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8g97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g7nb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:37Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.298897 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"423e1ef0-d2b1-442c-8751-372d2de26a00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 20:12:09.715158 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 20:12:09.723356 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1998965949/tls.crt::/tmp/serving-cert-1998965949/tls.key\\\\\\\"\\\\nI1202 20:12:15.056297 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 20:12:15.060468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 20:12:15.060506 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 20:12:15.060526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 20:12:15.060532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 20:12:15.077181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1202 20:12:15.077200 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 20:12:15.077201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077211 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077216 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 20:12:15.077220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 20:12:15.077223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 20:12:15.077226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 20:12:15.079033 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:37Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.319486 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9a3760c-5324-4e7c-ba6d-947d4200fd7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca809eaaaad32548ccb944ec8a40abe7dc6349d1e4b1640e53d5cd1aba9e8798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3931263dfbba517a3f109c2fec313fdb38b7232f4384fbf922d2de2f40d95840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://567f06ef173b505801a2b909e2ffd85973c697cbd3d89c10a5b118247c58c193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bce1cb4a264ff1c25ca77f3e4c9aa251e20002d987f4e1aa61fce2c99799f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:37Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.334864 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:37Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.352229 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64bd216ef667b40cff69e55295ff7ac4d61f8e893dc1f24f12f42aed5463417b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:37Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.373419 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.373468 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.373481 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.373510 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.373524 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:37Z","lastTransitionTime":"2025-12-02T20:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.380289 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8cae09d-e96b-4d09-b52e-0914be7de1b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c89bf933f9cea43b887da33832efb415751c7e4b2c920514543b1e30a2a584c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fc05d7f51b4dd38d72107bd79f5c2e1ecfd31843e251bbeb816545d3e491a3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f59840a8a600786748b3a153bf8b1a784a55c3b43ffd837c5deaf1d14d15cbed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://417274ef4639b491835a4ae4d0df778270cdb94ee594d2ff8bfdd86b794ca488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd0e59c5c92c3ed21d1c83f3d929d9294fc86b7d7518bc1ed5091fc7c055b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:37Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.392921 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjq8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64bc04e9-c9fc-4a80-98de-59c88457ace6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9693d017132303f9c7998ef2e45ebe24e58350336b436a84e0a3d1260b8d42d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpzx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:37Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.402477 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5558dc7c-93f9-4212-bf22-fdec743e47ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a71fbcbe900da771fbfbdbe666a0770e052c398e317c196741da6868bac278c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff692b0e7bcf0f9571138c9632ae4c22dddcccc147c6b0e7994e7ff6e13aa08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wzhpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:37Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.421179 4796 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a81d4f-9cb5-40b1-93cf-5691b915a68e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://655f6d42d97749b669cfa44687f71146615027902d52926d59558a788be4553a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb637106bbcec18e199338d1deb9718195ec58a98bd3926955d08e7f77401544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://045bb912902864e6abcf1d71fd0e3f8dc941197ea1e572d2e2e2011b536ba7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be133bdee26d23a224b8da570240c1862aa542c005d8017b2d1817c7c286769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b39195ca74c45e0c948605f1741eef4d655ae6ee34c0fdf6fb1159336e99c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e244b0768662bff1ef8a3d0ad1db2df8f9470ec043bbe007542e37d69a3586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2c4c421ad67664f3966e70b69486e8bb132090a43d539283cccbf4f0940b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b398c523488b2b448278d063e640f93da99a6652df77bc9ef3db287d6b9fcf92\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T20:12:29Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 20:12:28.842374 6079 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 20:12:28.842388 6079 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1202 20:12:28.842403 6079 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 20:12:28.842408 6079 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 20:12:28.842420 6079 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 20:12:28.842449 6079 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1202 20:12:28.842468 6079 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1202 20:12:28.842490 6079 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 20:12:28.842486 6079 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 20:12:28.842513 6079 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 20:12:28.842544 6079 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 20:12:28.842593 6079 factory.go:656] Stopping watch factory\\\\nI1202 20:12:28.842614 6079 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1202 20:12:28.842478 6079 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1202 20:12:28.842847 6079 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2c4c421ad67664f3966e70b69486e8bb132090a43d539283cccbf4f0940b5e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T20:12:30Z\\\",\\\"message\\\":\\\"etwork controller: failed to start default node network controller: failed to 
set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:30Z is after 2025-08-24T17:21:41Z]\\\\nI1202 20:12:30.183758 6213 obj_retry.go:409] Going to retry *v1.Pod resource setup for 12 objects: [openshift-network-diagnostics/network-check-target-xd92c openshift-kube-apiserver/kube-apiserver-crc openshift-kube-controller-manager/kube-controller-manager-crc openshift-ovn-kubernetes/ovnkube-node-b286j openshift-image-registry/node-ca-8p72p openshift-machine-config-operator/machine-config-daemon-wzhpq openshift-multus/multus-m672l openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-etcd/etcd-crc openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-multus/multus-additional-cni-plugins-mzw77 openshift-network-operator/iptables-alerter-4ln5h]\\\\nI1202 20:12:30.183771 6213 lb_config.go:1031] Cluster endpoints for openshift-dns-operator/metrics for network=default are: map[]\\\\nI1202 20:12:30.183781 6213 services_controller.go:443] Built service openshift-dns-operator/metrics LB cluster-wi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f94dfdee88013ea62594ac91f02990be3bf0d544fb937f370baccd6720d1f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b286j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:37Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.434302 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p72p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7cf3531-4bca-4b5d-9fa6-e70775605e3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a44d3bef0785796e52961c2f6031af15cf2f142b34f771283ad3f2d8b8abed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47szf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p72p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:37Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.448191 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b634e2ec6fa9c531cec805314972afc064cf0790587894836a857a7c48e66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:37Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.460510 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:37Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.474204 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:37Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.475895 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.475938 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.475949 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.475968 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.475978 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:37Z","lastTransitionTime":"2025-12-02T20:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.487670 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m672l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03fe6ac0-1095-4336-a25c-4dd0d6e45053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31287b814c58a714ee9e12931a432d350919edbaae9abdfda56aa0379f08961a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nnfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m672l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:37Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.498603 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:37Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.513418 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzw77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93502952b6cfea496c2427f936ff031762e3577ca74b4eee4998e3800f768211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"reaso
n\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzw77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-02T20:12:37Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.524790 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r7pwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985789d-ae41-4ae1-938a-8f60820a303c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4176f1cf86fa360a7a980992974166a8ea22b7f6f0ba8b28539f3e411c9511eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8qs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://735f79154356ab35adf831f1f36fd0956463bf969e88e69508401c8976d24917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8qs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r7pwk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:37Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.578415 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.578452 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.578461 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.578480 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.578489 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:37Z","lastTransitionTime":"2025-12-02T20:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.680456 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.680495 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.680506 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.680520 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.680531 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:37Z","lastTransitionTime":"2025-12-02T20:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.782949 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.782994 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.783006 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.783024 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.783036 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:37Z","lastTransitionTime":"2025-12-02T20:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.885353 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.885391 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.885403 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.885421 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.885433 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:37Z","lastTransitionTime":"2025-12-02T20:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.988416 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.988457 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.988468 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.988485 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:37 crc kubenswrapper[4796]: I1202 20:12:37.988496 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:37Z","lastTransitionTime":"2025-12-02T20:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.091025 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.091099 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.091118 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.091143 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.091161 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:38Z","lastTransitionTime":"2025-12-02T20:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.193447 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.193513 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.193530 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.193555 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.193573 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:38Z","lastTransitionTime":"2025-12-02T20:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.296441 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.296509 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.296527 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.296556 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.296576 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:38Z","lastTransitionTime":"2025-12-02T20:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.336025 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.341413 4796 scope.go:117] "RemoveContainer" containerID="5a2c4c421ad67664f3966e70b69486e8bb132090a43d539283cccbf4f0940b5e" Dec 02 20:12:38 crc kubenswrapper[4796]: E1202 20:12:38.341761 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-b286j_openshift-ovn-kubernetes(87a81d4f-9cb5-40b1-93cf-5691b915a68e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.356723 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b634e2ec6fa9c531cec805314972afc064cf0790587894836a857a7c48e66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:38Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.376319 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:38Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.391770 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:38Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.399440 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.399527 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.399580 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.399604 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.399663 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:38Z","lastTransitionTime":"2025-12-02T20:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.410639 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m672l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03fe6ac0-1095-4336-a25c-4dd0d6e45053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31287b814c58a714ee9e12931a432d350919edbaae9abdfda56aa0379f08961a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nnfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m672l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:38Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.430459 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a81d4f-9cb5-40b1-93cf-5691b915a68e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://655f6d42d97749b669cfa44687f71146615027902d52926d59558a788be4553a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb637106bbcec18e199338d1deb9718195ec58a98bd3926955d08e7f77401544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://045bb912902864e6abcf1d71fd0e3f8dc941197ea1e572d2e2e2011b536ba7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be133bdee26d23a224b8da570240c1862aa542c005d8017b2d1817c7c286769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b39195ca74c45e0c948605f1741eef4d655ae6ee34c0fdf6fb1159336e99c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e244b0768662bff1ef8a3d0ad1db2df8f9470ec043bbe007542e37d69a3586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2c4c421ad67664f3966e70b69486e8bb132090a43d539283cccbf4f0940b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2c4c421ad67664f3966e70b69486e8bb132090a43d539283cccbf4f0940b5e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T20:12:30Z\\\",\\\"message\\\":\\\"etwork controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:30Z is after 2025-08-24T17:21:41Z]\\\\nI1202 20:12:30.183758 6213 obj_retry.go:409] Going to retry *v1.Pod resource setup for 12 objects: [openshift-network-diagnostics/network-check-target-xd92c openshift-kube-apiserver/kube-apiserver-crc openshift-kube-controller-manager/kube-controller-manager-crc openshift-ovn-kubernetes/ovnkube-node-b286j openshift-image-registry/node-ca-8p72p openshift-machine-config-operator/machine-config-daemon-wzhpq openshift-multus/multus-m672l openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-etcd/etcd-crc openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-multus/multus-additional-cni-plugins-mzw77 openshift-network-operator/iptables-alerter-4ln5h]\\\\nI1202 20:12:30.183771 6213 lb_config.go:1031] Cluster endpoints for openshift-dns-operator/metrics for network=default are: map[]\\\\nI1202 
20:12:30.183781 6213 services_controller.go:443] Built service openshift-dns-operator/metrics LB cluster-wi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-b286j_openshift-ovn-kubernetes(87a81d4f-9cb5-40b1-93cf-5691b915a68e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f94dfdee88013ea62594ac91f02990be3bf0d544fb937f370baccd6720d1f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuberne
tes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b286j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:38Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.443045 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p72p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7cf3531-4bca-4b5d-9fa6-e70775605e3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a44d3bef0785796e52961c2f6031af15cf2f142b34f771283ad3f2d8b8abed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47szf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p72p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:38Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.466414 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzw77" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93502952b6cfea496c2427f936ff031762e3577ca74b4eee4998e3800f768211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzw77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:38Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.482344 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r7pwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985789d-ae41-4ae1-938a-8f60820a303c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4176f1cf86fa360a7a980992974166a8ea22b7f6f0ba8b28539f3e411c9511eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8qs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://735f79154356ab35adf831f1f36fd0956463bf969e88e69508401c8976d24917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8qs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r7pwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:38Z is after 2025-08-24T17:21:41Z" Dec 02 
20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.501746 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:38Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.502886 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.503031 4796 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.503147 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.503246 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.503379 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:38Z","lastTransitionTime":"2025-12-02T20:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.523200 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9a3760c-5324-4e7c-ba6d-947d4200fd7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca809eaaaad32548ccb944ec8a40abe7dc6349d1e4b1640e53d5cd1aba9e8798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3931263dfbba517a3f109c2fec313fdb38b7232f4384fbf922d2de2f40d95840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://567f06ef173b505801a2b909e2ffd85973c697cbd3d89c10a5b118247c58c193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bce1cb4a264ff1c25ca77f3e4c9aa251e20002d987f4e1aa61fce2c99799f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:38Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.537495 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:38Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.549489 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64bd216ef667b40cff69e55295ff7ac4d61f8e893dc1f24f12f42aed5463417b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:38Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.559918 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g7nb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c1710d-bf66-4687-8ee7-ea828cde5d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8g97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8g97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g7nb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:38Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.572890 4796 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423e1ef0-d2b1-442c-8751-372d2de26a00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f049aa2008cfdc6da8e47b030f65ed6
d46bc87a25814c64291468ece23fabbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 20:12:09.715158 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 20:12:09.723356 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1998965949/tls.crt::/tmp/serving-cert-1998965949/tls.key\\\\\\\"\\\\nI1202 20:12:15.056297 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 20:12:15.060468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 20:12:15.060506 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 20:12:15.060526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 20:12:15.060532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 20:12:15.077181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1202 20:12:15.077200 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 20:12:15.077201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077211 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077216 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 20:12:15.077220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 20:12:15.077223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 20:12:15.077226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 20:12:15.079033 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:38Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.582836 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjq8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64bc04e9-c9fc-4a80-98de-59c88457ace6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9693d017132303f9c7998ef2e45ebe24e58350336b436a84e0a3d1260b8d42d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpzx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:38Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.592629 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5558dc7c-93f9-4212-bf22-fdec743e47ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a71fbcbe900da771fbfbdbe666a0770e052c398e317c196741da6868bac278c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff692b0e7bcf0f9571138c9632ae4c22dddcccc147c6b0e7994e7ff6e13aa08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wzhpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:38Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.605873 4796 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.605944 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.605965 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.605993 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.606012 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:38Z","lastTransitionTime":"2025-12-02T20:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.624127 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8cae09d-e96b-4d09-b52e-0914be7de1b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c89bf933f9cea43b887da33832efb415751c7e4b2c920514543b1e30a2a584c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fc05d7f51b4dd38d72107bd79f5c2e1ecfd31843e251bbeb816545d3e491a3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f59840a8a600786748b3a153bf8b1a784a55c3b43ffd837c5deaf1d14d15cbed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://417274ef4639b491835a4ae4d0df778270cdb94ee594d2ff8bfdd86b794ca488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd0e59c5c92c3ed21d1c83f3d929d9294fc86b7d7518bc1ed5091fc7c055b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:38Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.708375 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.708479 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.708504 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.708533 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.708555 4796 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:38Z","lastTransitionTime":"2025-12-02T20:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.811395 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.811456 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.811467 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.811483 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.811493 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:38Z","lastTransitionTime":"2025-12-02T20:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.914121 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.914159 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.914171 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.914187 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.914199 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:38Z","lastTransitionTime":"2025-12-02T20:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.919582 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.919629 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.919645 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.919664 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.919679 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:38Z","lastTransitionTime":"2025-12-02T20:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:38 crc kubenswrapper[4796]: E1202 20:12:38.935618 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"45bad139-cd57-490c-a638-731a42709a6c\\\",\\\"systemUUID\\\":\\\"9fca617c-b4ed-442d-9d01-94fab08be868\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:38Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.939961 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.939994 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.940002 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.940016 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.940025 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:38Z","lastTransitionTime":"2025-12-02T20:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:38 crc kubenswrapper[4796]: E1202 20:12:38.958845 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"45bad139-cd57-490c-a638-731a42709a6c\\\",\\\"systemUUID\\\":\\\"9fca617c-b4ed-442d-9d01-94fab08be868\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:38Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.962936 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.963084 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.963171 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.963280 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.963389 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:38Z","lastTransitionTime":"2025-12-02T20:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:38 crc kubenswrapper[4796]: E1202 20:12:38.978626 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"45bad139-cd57-490c-a638-731a42709a6c\\\",\\\"systemUUID\\\":\\\"9fca617c-b4ed-442d-9d01-94fab08be868\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:38Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.982245 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.982535 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
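
Every "Error updating node status, will retry" entry in this window ends the same way: the status patch is rejected with "failed calling webhook node.network-node-identity.openshift.io" because TLS verification of the endpoint at https://127.0.0.1:9743 fails, the certificate presented there having expired on 2025-08-24T17:21:41Z while the node clock reads 2025-12-02T20:12:38Z. A minimal sketch for confirming the expiry from the node itself, assuming Python 3 plus the third-party cryptography package; the host and port are taken from the webhook URL in the log, everything else is illustrative:

import ssl
from datetime import datetime, timezone
from cryptography import x509

# Endpoint taken from the failing webhook call in the log above.
HOST, PORT = "127.0.0.1", 9743

# ssl.get_server_certificate() does not verify the peer when ca_certs is left
# unset, so it can still fetch a certificate that has already expired.
pem = ssl.get_server_certificate((HOST, PORT))
cert = x509.load_pem_x509_certificate(pem.encode())

# not_valid_after is a naive UTC datetime; newer cryptography releases also
# offer not_valid_after_utc.
not_after = cert.not_valid_after.replace(tzinfo=timezone.utc)
now = datetime.now(timezone.utc)

print(f"subject:  {cert.subject.rfc4514_string()}")
print(f"notAfter: {not_after.isoformat()}")
print(f"expired:  {now > not_after} (now = {now.isoformat()})")

If the printed notAfter matches the 2025-08-24T17:21:41Z in the error, the problem is the webhook's serving certificate rather than anything in the patch payload the kubelet keeps resending below.
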
event="NodeHasNoDiskPressure" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.982693 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.982856 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:38 crc kubenswrapper[4796]: I1202 20:12:38.983021 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:38Z","lastTransitionTime":"2025-12-02T20:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:38 crc kubenswrapper[4796]: E1202 20:12:38.998328 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"45bad139-cd57-490c-a638-731a42709a6c\\\",\\\"systemUUID\\\":\\\"9fca617c-b4ed-442d-9d01-94fab08be868\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:38Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.001778 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.002004 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.002158 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.002358 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.002512 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:39Z","lastTransitionTime":"2025-12-02T20:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:39 crc kubenswrapper[4796]: E1202 20:12:39.022659 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"45bad139-cd57-490c-a638-731a42709a6c\\\",\\\"systemUUID\\\":\\\"9fca617c-b4ed-442d-9d01-94fab08be868\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:39Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:39 crc kubenswrapper[4796]: E1202 20:12:39.022907 4796 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.024705 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.024781 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.024797 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.024849 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.024866 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:39Z","lastTransitionTime":"2025-12-02T20:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.127167 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.127208 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.127220 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.127237 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.127283 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:39Z","lastTransitionTime":"2025-12-02T20:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.229924 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.229971 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.229990 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.230012 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.230028 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:39Z","lastTransitionTime":"2025-12-02T20:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.264039 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:12:39 crc kubenswrapper[4796]: E1202 20:12:39.264157 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.264211 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.264049 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.264249 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:12:39 crc kubenswrapper[4796]: E1202 20:12:39.264330 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:12:39 crc kubenswrapper[4796]: E1202 20:12:39.264434 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:12:39 crc kubenswrapper[4796]: E1202 20:12:39.264601 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
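
The "No sandbox for pod can be found" and "Error syncing pod, skipping" entries for network-metrics-daemon-g7nb5, network-check-target-xd92c, network-check-source and the networking-console-plugin all quote the same root cause as the NotReady condition: NetworkReady=false because no CNI configuration file exists in /etc/kubernetes/cni/net.d/. A quick, stdlib-only sketch for checking that directory on the node; the path comes from the log, and the accepted extensions follow the usual CNI convention (.conf, .conflist, .json) and are an assumption here:

from pathlib import Path

# Directory named in the kubelet's NetworkPluginNotReady message.
CNI_CONF_DIR = Path("/etc/kubernetes/cni/net.d")

# Extensions CNI-aware runtimes conventionally scan for; treat as an assumption.
CNI_EXTENSIONS = {".conf", ".conflist", ".json"}

if not CNI_CONF_DIR.is_dir():
    print(f"{CNI_CONF_DIR} does not exist")
else:
    configs = sorted(p for p in CNI_CONF_DIR.iterdir() if p.suffix in CNI_EXTENSIONS)
    if configs:
        for p in configs:
            print(f"found CNI config: {p}")
    else:
        print(f"{CNI_CONF_DIR} is empty, which matches the NetworkPluginNotReady errors")

On OpenShift that file is normally written by the network plugin pods (multus / ovn-kubernetes) once they start, so an empty directory here is a symptom of the network provider not coming up rather than a file to create by hand.
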
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.332353 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.332401 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.332412 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.332430 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.332442 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:39Z","lastTransitionTime":"2025-12-02T20:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.434937 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.434988 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.435004 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.435025 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.435040 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:39Z","lastTransitionTime":"2025-12-02T20:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.537820 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.537878 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.537895 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.537917 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.537931 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:39Z","lastTransitionTime":"2025-12-02T20:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.641314 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.641373 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.641384 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.641402 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.641414 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:39Z","lastTransitionTime":"2025-12-02T20:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.743675 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.743715 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.743725 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.743739 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.743751 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:39Z","lastTransitionTime":"2025-12-02T20:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.846605 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.846665 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.846675 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.846689 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.846701 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:39Z","lastTransitionTime":"2025-12-02T20:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.949512 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.949550 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.949558 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.949573 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:39 crc kubenswrapper[4796]: I1202 20:12:39.949583 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:39Z","lastTransitionTime":"2025-12-02T20:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.052310 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.052350 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.052362 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.052380 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.052391 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:40Z","lastTransitionTime":"2025-12-02T20:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.155189 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.155240 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.155281 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.155304 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.155319 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:40Z","lastTransitionTime":"2025-12-02T20:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.257523 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.257764 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.257834 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.257930 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.258001 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:40Z","lastTransitionTime":"2025-12-02T20:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.360223 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.360534 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.360601 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.360664 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.360731 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:40Z","lastTransitionTime":"2025-12-02T20:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.463080 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.463122 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.463140 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.463162 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.463178 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:40Z","lastTransitionTime":"2025-12-02T20:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.565831 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.565870 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.565878 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.565894 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.565903 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:40Z","lastTransitionTime":"2025-12-02T20:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.668488 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.668543 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.668582 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.668612 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.668633 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:40Z","lastTransitionTime":"2025-12-02T20:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.771563 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.771619 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.771636 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.771660 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.771677 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:40Z","lastTransitionTime":"2025-12-02T20:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.874731 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.874799 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.874821 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.874851 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.874874 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:40Z","lastTransitionTime":"2025-12-02T20:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.977834 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.977892 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.977903 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.977918 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:40 crc kubenswrapper[4796]: I1202 20:12:40.977928 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:40Z","lastTransitionTime":"2025-12-02T20:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
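
Because the same cycle repeats every ~100 ms (four "Recording event message" entries followed by one "Node became not ready" entry), this stretch is easier to read as counts than as raw lines. A small sketch that tallies the repeated messages from a saved copy of this journal; the filename is illustrative:

import re
from collections import Counter

# Illustrative path, e.g. the output of saving this kubelet journal to a file.
LOG_FILE = "kubelet.log"

event_re = re.compile(r'event="(?P<event>[^"]+)"')
sync_err_re = re.compile(r'Error syncing pod.*?pod="(?P<pod>[^"]+)"')

events, sync_errors = Counter(), Counter()
with open(LOG_FILE, encoding="utf-8", errors="replace") as fh:
    for line in fh:
        if "Recording event message for node" in line:
            m = event_re.search(line)
            if m:
                events[m.group("event")] += 1
        elif "Error syncing pod" in line:
            m = sync_err_re.search(line)
            if m:
                sync_errors[m.group("pod")] += 1

print("node events:", dict(events))
print("pod sync errors:", dict(sync_errors))
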
Has your network provider started?"} Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.081377 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.081438 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.081458 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.081493 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.081512 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:41Z","lastTransitionTime":"2025-12-02T20:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.184635 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.184725 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.184781 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.184812 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.184830 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:41Z","lastTransitionTime":"2025-12-02T20:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.240143 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60c1710d-bf66-4687-8ee7-ea828cde5d53-metrics-certs\") pod \"network-metrics-daemon-g7nb5\" (UID: \"60c1710d-bf66-4687-8ee7-ea828cde5d53\") " pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:12:41 crc kubenswrapper[4796]: E1202 20:12:41.240399 4796 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 20:12:41 crc kubenswrapper[4796]: E1202 20:12:41.240512 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60c1710d-bf66-4687-8ee7-ea828cde5d53-metrics-certs podName:60c1710d-bf66-4687-8ee7-ea828cde5d53 nodeName:}" failed. No retries permitted until 2025-12-02 20:12:49.240476071 +0000 UTC m=+52.243851635 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60c1710d-bf66-4687-8ee7-ea828cde5d53-metrics-certs") pod "network-metrics-daemon-g7nb5" (UID: "60c1710d-bf66-4687-8ee7-ea828cde5d53") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.264180 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.264327 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:12:41 crc kubenswrapper[4796]: E1202 20:12:41.264783 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.264385 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.264353 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:12:41 crc kubenswrapper[4796]: E1202 20:12:41.264585 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:12:41 crc kubenswrapper[4796]: E1202 20:12:41.266117 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:12:41 crc kubenswrapper[4796]: E1202 20:12:41.269116 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.287134 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.287178 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.287190 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.287234 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.287287 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:41Z","lastTransitionTime":"2025-12-02T20:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.391093 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.391150 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.391168 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.391192 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.391208 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:41Z","lastTransitionTime":"2025-12-02T20:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.494559 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.494604 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.494620 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.494642 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.494659 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:41Z","lastTransitionTime":"2025-12-02T20:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.598129 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.598200 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.598222 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.598285 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.598311 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:41Z","lastTransitionTime":"2025-12-02T20:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.701355 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.701599 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.701671 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.701763 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.701824 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:41Z","lastTransitionTime":"2025-12-02T20:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.804589 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.804670 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.804692 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.804717 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.804737 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:41Z","lastTransitionTime":"2025-12-02T20:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.907506 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.907582 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.907604 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.907635 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:41 crc kubenswrapper[4796]: I1202 20:12:41.907699 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:41Z","lastTransitionTime":"2025-12-02T20:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.011404 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.011480 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.011502 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.011537 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.011561 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:42Z","lastTransitionTime":"2025-12-02T20:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.114440 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.114503 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.114523 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.114546 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.114564 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:42Z","lastTransitionTime":"2025-12-02T20:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.217990 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.218056 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.218074 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.218101 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.218124 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:42Z","lastTransitionTime":"2025-12-02T20:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.320452 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.320498 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.320508 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.320524 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.320535 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:42Z","lastTransitionTime":"2025-12-02T20:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.424001 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.424112 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.424140 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.424213 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.424235 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:42Z","lastTransitionTime":"2025-12-02T20:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.527741 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.527799 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.527816 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.527841 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.527864 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:42Z","lastTransitionTime":"2025-12-02T20:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.629959 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.630028 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.630046 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.630073 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.630091 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:42Z","lastTransitionTime":"2025-12-02T20:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.733078 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.733147 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.733167 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.733192 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.733212 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:42Z","lastTransitionTime":"2025-12-02T20:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.836423 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.836496 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.836515 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.836543 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.836562 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:42Z","lastTransitionTime":"2025-12-02T20:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.939516 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.939581 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.939601 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.939626 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:42 crc kubenswrapper[4796]: I1202 20:12:42.939643 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:42Z","lastTransitionTime":"2025-12-02T20:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.043180 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.043398 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.043425 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.043452 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.043470 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:43Z","lastTransitionTime":"2025-12-02T20:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.146433 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.146482 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.146494 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.146513 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.146526 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:43Z","lastTransitionTime":"2025-12-02T20:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.248967 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.249018 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.249037 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.249066 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.249088 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:43Z","lastTransitionTime":"2025-12-02T20:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.263999 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.264083 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.264107 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.264168 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:12:43 crc kubenswrapper[4796]: E1202 20:12:43.264669 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:12:43 crc kubenswrapper[4796]: E1202 20:12:43.264794 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:12:43 crc kubenswrapper[4796]: E1202 20:12:43.264832 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:12:43 crc kubenswrapper[4796]: E1202 20:12:43.264947 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.352044 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.352118 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.352135 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.352176 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.352188 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:43Z","lastTransitionTime":"2025-12-02T20:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.455500 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.455591 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.455614 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.456068 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.456419 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:43Z","lastTransitionTime":"2025-12-02T20:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.560522 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.560586 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.560603 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.560630 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.560648 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:43Z","lastTransitionTime":"2025-12-02T20:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.663133 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.663200 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.663217 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.663437 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.663456 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:43Z","lastTransitionTime":"2025-12-02T20:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.766033 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.766089 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.766105 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.766128 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.766145 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:43Z","lastTransitionTime":"2025-12-02T20:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.869640 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.869685 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.869697 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.869715 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.869727 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:43Z","lastTransitionTime":"2025-12-02T20:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.972866 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.972914 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.972924 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.972940 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:43 crc kubenswrapper[4796]: I1202 20:12:43.972950 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:43Z","lastTransitionTime":"2025-12-02T20:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.075197 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.075277 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.075292 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.075311 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.075323 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:44Z","lastTransitionTime":"2025-12-02T20:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.178133 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.178196 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.178214 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.178239 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.178311 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:44Z","lastTransitionTime":"2025-12-02T20:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.281354 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.281451 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.281472 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.281497 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.281516 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:44Z","lastTransitionTime":"2025-12-02T20:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.384023 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.384117 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.384152 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.384181 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.384203 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:44Z","lastTransitionTime":"2025-12-02T20:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.487310 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.487357 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.487374 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.487389 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.487400 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:44Z","lastTransitionTime":"2025-12-02T20:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.590466 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.590526 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.590543 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.590569 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.590586 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:44Z","lastTransitionTime":"2025-12-02T20:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.693878 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.693938 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.693954 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.693975 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.693993 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:44Z","lastTransitionTime":"2025-12-02T20:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.796798 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.796876 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.796895 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.796951 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.796969 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:44Z","lastTransitionTime":"2025-12-02T20:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.900720 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.900806 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.900844 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.900877 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:44 crc kubenswrapper[4796]: I1202 20:12:44.900906 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:44Z","lastTransitionTime":"2025-12-02T20:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.003589 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.003673 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.003689 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.003711 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.004384 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:45Z","lastTransitionTime":"2025-12-02T20:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.107518 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.107671 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.107692 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.107762 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.107782 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:45Z","lastTransitionTime":"2025-12-02T20:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.211285 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.211557 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.211689 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.211805 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.211947 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:45Z","lastTransitionTime":"2025-12-02T20:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.264353 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.264410 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.264477 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:12:45 crc kubenswrapper[4796]: E1202 20:12:45.264566 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.264857 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:12:45 crc kubenswrapper[4796]: E1202 20:12:45.265037 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:12:45 crc kubenswrapper[4796]: E1202 20:12:45.265120 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:12:45 crc kubenswrapper[4796]: E1202 20:12:45.265006 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.314640 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.314699 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.314716 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.314740 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.314758 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:45Z","lastTransitionTime":"2025-12-02T20:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.417231 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.417529 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.417623 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.417720 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.417818 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:45Z","lastTransitionTime":"2025-12-02T20:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.520353 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.520416 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.520428 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.520445 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.520457 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:45Z","lastTransitionTime":"2025-12-02T20:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.623430 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.623499 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.623524 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.623554 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.623576 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:45Z","lastTransitionTime":"2025-12-02T20:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.726663 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.726709 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.726719 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.726734 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.726744 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:45Z","lastTransitionTime":"2025-12-02T20:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.829651 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.829689 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.829697 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.829711 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.829720 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:45Z","lastTransitionTime":"2025-12-02T20:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.933002 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.933068 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.933088 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.933121 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:45 crc kubenswrapper[4796]: I1202 20:12:45.933141 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:45Z","lastTransitionTime":"2025-12-02T20:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.035917 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.036046 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.036072 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.036106 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.036130 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:46Z","lastTransitionTime":"2025-12-02T20:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.139442 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.139519 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.139542 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.139566 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.139583 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:46Z","lastTransitionTime":"2025-12-02T20:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.243203 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.243312 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.243333 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.243363 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.243382 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:46Z","lastTransitionTime":"2025-12-02T20:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.347655 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.347716 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.347734 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.347759 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.347778 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:46Z","lastTransitionTime":"2025-12-02T20:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.452160 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.452292 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.452324 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.452364 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.452390 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:46Z","lastTransitionTime":"2025-12-02T20:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.555380 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.555462 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.555483 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.555510 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.555530 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:46Z","lastTransitionTime":"2025-12-02T20:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.658787 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.658870 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.658894 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.658929 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.658952 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:46Z","lastTransitionTime":"2025-12-02T20:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.762903 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.762996 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.763016 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.763053 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.763075 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:46Z","lastTransitionTime":"2025-12-02T20:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.865393 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.865463 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.865488 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.865518 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.865541 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:46Z","lastTransitionTime":"2025-12-02T20:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.969232 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.969387 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.969414 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.969461 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:46 crc kubenswrapper[4796]: I1202 20:12:46.969490 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:46Z","lastTransitionTime":"2025-12-02T20:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.073435 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.073525 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.073540 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.073560 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.073575 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:47Z","lastTransitionTime":"2025-12-02T20:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.108333 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:12:47 crc kubenswrapper[4796]: E1202 20:12:47.108562 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:13:19.108520051 +0000 UTC m=+82.111895615 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.108817 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.108954 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:12:47 crc kubenswrapper[4796]: E1202 20:12:47.109057 4796 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 20:12:47 crc kubenswrapper[4796]: E1202 20:12:47.109104 4796 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 20:12:47 crc kubenswrapper[4796]: E1202 20:12:47.109180 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 20:13:19.109149997 +0000 UTC m=+82.112525571 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 20:12:47 crc kubenswrapper[4796]: E1202 20:12:47.109218 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 20:13:19.109201678 +0000 UTC m=+82.112577242 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.176655 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.176711 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.176723 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.176742 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.176757 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:47Z","lastTransitionTime":"2025-12-02T20:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.210248 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.210402 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:12:47 crc kubenswrapper[4796]: E1202 20:12:47.210607 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 20:12:47 crc kubenswrapper[4796]: E1202 20:12:47.210657 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 20:12:47 crc kubenswrapper[4796]: E1202 20:12:47.210662 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 20:12:47 crc kubenswrapper[4796]: E1202 20:12:47.210680 4796 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 20:12:47 crc kubenswrapper[4796]: E1202 20:12:47.210694 4796 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 20:12:47 crc kubenswrapper[4796]: E1202 20:12:47.210719 4796 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 20:12:47 crc kubenswrapper[4796]: E1202 20:12:47.210784 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 20:13:19.210750556 +0000 UTC m=+82.214126120 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 20:12:47 crc kubenswrapper[4796]: E1202 20:12:47.210822 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 20:13:19.210807087 +0000 UTC m=+82.214182661 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.264892 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.264888 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.265043 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.265087 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:12:47 crc kubenswrapper[4796]: E1202 20:12:47.265324 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:12:47 crc kubenswrapper[4796]: E1202 20:12:47.265537 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:12:47 crc kubenswrapper[4796]: E1202 20:12:47.265698 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:12:47 crc kubenswrapper[4796]: E1202 20:12:47.265996 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.279981 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.280024 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.280033 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.280050 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.280060 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:47Z","lastTransitionTime":"2025-12-02T20:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.290446 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64bc04e9-c9fc-4a80-98de-59c88457ace6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9693d017132303f9c7998ef2e45ebe24e58350336b436a84e0a3d1260b8d42d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpzx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:47Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.310595 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5558dc7c-93f9-4212-bf22-fdec743e47ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a71fbcbe900da771fbfbdbe666a0770e052c398e317c196741da6868bac278c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff692b0e7bcf0f9571138c9632ae4c22dddcccc147c6b0e7994e7ff6e13aa08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wzhpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:47Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.355225 4796 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8cae09d-e96b-4d09-b52e-0914be7de1b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c89bf933f9cea43b887da33832efb415751c7e4b2c920514543b1e30a2a584c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fc05d7f51b4dd38d72107bd79f5c2e1ecfd31843e251bbeb816545d3e491a3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f59840a8a600786748b3a153bf8b1a784a55c3b43ffd837c5deaf1d14d15cbed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://417274ef4639b491835a4ae4d0df778270cdb94ee594d2ff8bfdd86b794ca488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd0e59c5c92c3ed21d1c83f3d929d9294fc86b7d7518bc1ed5091fc7c055b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:47Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.375474 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b634e2ec6fa9c531cec805314972afc064cf0790587894836a857a7c48e66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:47Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.382947 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.383004 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.383026 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.383051 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.383069 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:47Z","lastTransitionTime":"2025-12-02T20:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.394503 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:47Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.413102 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:47Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.434687 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m672l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03fe6ac0-1095-4336-a25c-4dd0d6e45053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31287b814c58a714ee9e12931a432d350919edbaae9abdfda56aa0379f08961a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nnfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m672l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:47Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.458972 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a81d4f-9cb5-40b1-93cf-5691b915a68e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://655f6d42d97749b669cfa44687f71146615027902d52926d59558a788be4553a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb637106bbcec18e199338d1deb9718195ec58a98bd3926955d08e7f77401544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://045bb912902864e6abcf1d71fd0e3f8dc941197ea1e572d2e2e2011b536ba7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be133bdee26d23a224b8da570240c1862aa542c005d8017b2d1817c7c286769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b39195ca74c45e0c948605f1741eef4d655ae6ee34c0fdf6fb1159336e99c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e244b0768662bff1ef8a3d0ad1db2df8f9470ec043bbe007542e37d69a3586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2c4c421ad67664f3966e70b69486e8bb132090
a43d539283cccbf4f0940b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2c4c421ad67664f3966e70b69486e8bb132090a43d539283cccbf4f0940b5e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T20:12:30Z\\\",\\\"message\\\":\\\"etwork controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:30Z is after 2025-08-24T17:21:41Z]\\\\nI1202 20:12:30.183758 6213 obj_retry.go:409] Going to retry *v1.Pod resource setup for 12 objects: [openshift-network-diagnostics/network-check-target-xd92c openshift-kube-apiserver/kube-apiserver-crc openshift-kube-controller-manager/kube-controller-manager-crc openshift-ovn-kubernetes/ovnkube-node-b286j openshift-image-registry/node-ca-8p72p openshift-machine-config-operator/machine-config-daemon-wzhpq openshift-multus/multus-m672l openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-etcd/etcd-crc openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-multus/multus-additional-cni-plugins-mzw77 openshift-network-operator/iptables-alerter-4ln5h]\\\\nI1202 20:12:30.183771 6213 lb_config.go:1031] Cluster endpoints for openshift-dns-operator/metrics for network=default are: map[]\\\\nI1202 20:12:30.183781 6213 services_controller.go:443] Built service openshift-dns-operator/metrics LB cluster-wi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b286j_openshift-ovn-kubernetes(87a81d4f-9cb5-40b1-93cf-5691b915a68e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f94dfdee88013ea62594ac91f02990be3bf0d544fb937f370baccd6720d1f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b286j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:47Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.471166 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p72p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7cf3531-4bca-4b5d-9fa6-e70775605e3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a44d3bef0785796e52961c2f6031af15cf2f142b34f771283ad3f2d8b8abed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47szf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p72p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:47Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.486841 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.486906 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.486924 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.486958 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.486976 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:47Z","lastTransitionTime":"2025-12-02T20:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.495319 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzw77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93502952b6cfea496c2427f936ff031762e3577ca74b4eee4998e3800f768211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzw77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:47Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.514111 4796 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r7pwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985789d-ae41-4ae1-938a-8f60820a303c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4176f1cf86fa360a7a980992974166a8ea22b7f6f0ba8b28539f3e411c9511eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8qs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://735f79154356ab35adf831f1f36fd0956463bf969e88e69508401c8976d24917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8qs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r7pwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T20:12:47Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.530381 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:47Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.543078 4796 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9a3760c-5324-4e7c-ba6d-947d4200fd7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca809eaaaad32548ccb944ec8a40abe7dc6349d1e4b1640e53d5cd1aba9e8798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3931263dfbba517a3f109c2fec313fdb38b7232f4384fbf922d2de2f40d95840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://567f06ef173b505801a2b909e2ffd85973c697cbd3d89c10a5b118247c58c193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bce1cb4a264ff1c25ca77f3e4c9aa251e20002d987f4e1aa6
1fce2c99799f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:47Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.563507 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:47Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.586277 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64bd216ef667b40cff69e55295ff7ac4d61f8e893dc1f24f12f42aed5463417b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:47Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.592186 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.592279 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.592309 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.592345 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.592366 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:47Z","lastTransitionTime":"2025-12-02T20:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.603066 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g7nb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c1710d-bf66-4687-8ee7-ea828cde5d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8g97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8g97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g7nb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:47Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.629072 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"423e1ef0-d2b1-442c-8751-372d2de26a00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 20:12:09.715158 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 20:12:09.723356 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1998965949/tls.crt::/tmp/serving-cert-1998965949/tls.key\\\\\\\"\\\\nI1202 20:12:15.056297 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 20:12:15.060468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 20:12:15.060506 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 20:12:15.060526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 20:12:15.060532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 20:12:15.077181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1202 20:12:15.077200 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 20:12:15.077201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077211 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077216 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 20:12:15.077220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 20:12:15.077223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 20:12:15.077226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 20:12:15.079033 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:47Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.695013 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.695091 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.695110 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.695143 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.695164 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:47Z","lastTransitionTime":"2025-12-02T20:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.799196 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.799283 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.799298 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.799317 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.799328 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:47Z","lastTransitionTime":"2025-12-02T20:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.907195 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.907328 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.907392 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.907431 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:47 crc kubenswrapper[4796]: I1202 20:12:47.907474 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:47Z","lastTransitionTime":"2025-12-02T20:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.011041 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.011115 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.011136 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.011167 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.011189 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:48Z","lastTransitionTime":"2025-12-02T20:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.115087 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.115188 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.115208 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.115242 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.115508 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:48Z","lastTransitionTime":"2025-12-02T20:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.219649 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.219732 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.219751 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.219780 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.219872 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:48Z","lastTransitionTime":"2025-12-02T20:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.324090 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.324171 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.324189 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.324219 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.324242 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:48Z","lastTransitionTime":"2025-12-02T20:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.372050 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.390402 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.400736 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"423e1ef0-d2b1-442c-8751-372d2de26a00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 20:12:09.715158 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 20:12:09.723356 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1998965949/tls.crt::/tmp/serving-cert-1998965949/tls.key\\\\\\\"\\\\nI1202 20:12:15.056297 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 20:12:15.060468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 20:12:15.060506 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 20:12:15.060526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 20:12:15.060532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 20:12:15.077181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1202 20:12:15.077200 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 20:12:15.077201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077211 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077216 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 20:12:15.077220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 20:12:15.077223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 20:12:15.077226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 20:12:15.079033 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:48Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.425553 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9a3760c-5324-4e7c-ba6d-947d4200fd7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca809eaaaad32548ccb944ec8a40abe7dc6349d1e4b1640e53d5cd1aba9e8798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3931263dfbba517a3f109c2fec313fdb38b7232f4384fbf922d2de2f40d95840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://567f06ef173b505801a2b909e2ffd85973c697cbd3d89c10a5b118247c58c193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bce1cb4a264ff1c25ca77f3e4c9aa251e20002d987f4e1aa61fce2c99799f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:48Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.428053 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.428119 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.428139 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.428164 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.428184 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:48Z","lastTransitionTime":"2025-12-02T20:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.446742 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:48Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.462583 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64bd216ef667b40cff69e55295ff7ac4d61f8e893dc1f24f12f42aed5463417b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:48Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.477706 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g7nb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c1710d-bf66-4687-8ee7-ea828cde5d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8g97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8g97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g7nb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:48Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.509556 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8cae09d-e96b-4d09-b52e-0914be7de1b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c89bf933f9cea43b887da33832efb415751c7e4b2c920514543b1e30a2a584c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fc05d7f51b4dd38d72107bd79f5c2e1ecfd31843e251bbeb816545d3e491a3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f59840a8a600786748b3a153bf8b1a784a55c3b43ffd837c5deaf1d14d15cbed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://417274ef4639b491835a4ae4d0df778270cdb94
ee594d2ff8bfdd86b794ca488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd0e59c5c92c3ed21d1c83f3d929d9294fc86b7d7518bc1ed5091fc7c055b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:48Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.527567 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64bc04e9-c9fc-4a80-98de-59c88457ace6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9693d017132303f9c7998ef2e45ebe24e58350336b436a84e0a3d1260b8d42d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpzx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:48Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.531155 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.531195 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.531234 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.531285 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.531299 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:48Z","lastTransitionTime":"2025-12-02T20:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.547224 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5558dc7c-93f9-4212-bf22-fdec743e47ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a71fbcbe900da771fbfbdbe666a0770e052c398e317c196741da6868bac278c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff692b0e7bcf0f9571138c9632ae4c22dddcccc147c6b0e7994e7ff6e13aa08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wzhpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:48Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.565280 4796 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b634e2ec6fa9c531cec805314972afc064cf0790587894836a857a7c48e66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:48Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.582648 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:48Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.596438 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:48Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.615321 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m672l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03fe6ac0-1095-4336-a25c-4dd0d6e45053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31287b814c58a714ee9e12931a432d350919edbaae9abdfda56aa0379f08961a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nnfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m672l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:48Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.635744 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.635789 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.635802 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.635832 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.635850 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:48Z","lastTransitionTime":"2025-12-02T20:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.644225 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a81d4f-9cb5-40b1-93cf-5691b915a68e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://655f6d42d97749b669cfa44687f71146615027902d52926d59558a788be4553a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb637106bbcec18e199338d1deb9718195ec58a98bd3926955d08e7f77401544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://045bb912902864e6abcf1d71fd0e3f8dc941197ea1e572d2e2e2011b536ba7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be133bdee26d23a224b8da570240c1862aa542c005d8017b2d1817c7c286769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b39195ca74c45e0c948605f1741eef4d655ae6ee34c0fdf6fb1159336e99c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e244b0768662bff1ef8a3d0ad1db2df8f9470ec043bbe007542e37d69a3586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2c4c421ad67664f3966e70b69486e8bb132090a43d539283cccbf4f0940b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2c4c421ad67664f3966e70b69486e8bb132090a43d539283cccbf4f0940b5e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T20:12:30Z\\\",\\\"message\\\":\\\"etwork controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:30Z is after 2025-08-24T17:21:41Z]\\\\nI1202 20:12:30.183758 6213 obj_retry.go:409] Going to retry *v1.Pod resource setup for 12 objects: [openshift-network-diagnostics/network-check-target-xd92c openshift-kube-apiserver/kube-apiserver-crc openshift-kube-controller-manager/kube-controller-manager-crc openshift-ovn-kubernetes/ovnkube-node-b286j openshift-image-registry/node-ca-8p72p openshift-machine-config-operator/machine-config-daemon-wzhpq openshift-multus/multus-m672l openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-etcd/etcd-crc openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-multus/multus-additional-cni-plugins-mzw77 openshift-network-operator/iptables-alerter-4ln5h]\\\\nI1202 20:12:30.183771 6213 lb_config.go:1031] Cluster endpoints for openshift-dns-operator/metrics for network=default are: map[]\\\\nI1202 20:12:30.183781 6213 services_controller.go:443] Built service openshift-dns-operator/metrics LB cluster-wi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b286j_openshift-ovn-kubernetes(87a81d4f-9cb5-40b1-93cf-5691b915a68e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f94dfdee88013ea62594ac91f02990be3bf0d544fb937f370baccd6720d1f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b286j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:48Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.658464 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p72p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7cf3531-4bca-4b5d-9fa6-e70775605e3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a44d3bef0785796e52961c2f6031af15cf2f142b34f771283ad3f2d8b8abed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47szf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p72p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:48Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.678740 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" 
for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:48Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.704227 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzw77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93502952b6cfea496c2427f936ff031762e3577ca74b4eee4998e3800f768211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rele
ase\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzw77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:48Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.718296 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r7pwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985789d-ae41-4ae1-938a-8f60820a303c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4176f1cf86fa360a7a980992974166a8ea22b7f6f0ba8b28539f3e411c9511eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8qs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://735f79154356ab35adf831f1f36fd0956463bf969e88e69508401c8976d24917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8qs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-12-02T20:12:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r7pwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:48Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.738778 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.738817 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.738828 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.738849 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.738862 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:48Z","lastTransitionTime":"2025-12-02T20:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.841955 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.842005 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.842017 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.842038 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.842050 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:48Z","lastTransitionTime":"2025-12-02T20:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.945910 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.946004 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.946023 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.946049 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:48 crc kubenswrapper[4796]: I1202 20:12:48.946062 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:48Z","lastTransitionTime":"2025-12-02T20:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.049313 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.049383 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.049400 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.049426 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.049448 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:49Z","lastTransitionTime":"2025-12-02T20:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.153086 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.153168 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.153194 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.153232 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.153329 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:49Z","lastTransitionTime":"2025-12-02T20:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.225486 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.225580 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.225595 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.225619 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.225648 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:49Z","lastTransitionTime":"2025-12-02T20:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:49 crc kubenswrapper[4796]: E1202 20:12:49.248084 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"45bad139-cd57-490c-a638-731a42709a6c\\\",\\\"systemUUID\\\":\\\"9fca617c-b4ed-442d-9d01-94fab08be868\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:49Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.255652 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.255728 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.255744 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.255771 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.255788 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:49Z","lastTransitionTime":"2025-12-02T20:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.264521 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.264619 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.264535 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:12:49 crc kubenswrapper[4796]: E1202 20:12:49.264812 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.264894 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:12:49 crc kubenswrapper[4796]: E1202 20:12:49.265054 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:12:49 crc kubenswrapper[4796]: E1202 20:12:49.265315 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:12:49 crc kubenswrapper[4796]: E1202 20:12:49.265442 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:12:49 crc kubenswrapper[4796]: E1202 20:12:49.279666 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"45bad139-cd57-490c-a638-731a42709a6c\\\",\\\"systemUUID\\\":\\\"9fca617c-b4ed-442d-9d01-94fab08be868\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:49Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.285637 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.285700 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.285724 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.285749 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.285770 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:49Z","lastTransitionTime":"2025-12-02T20:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:49 crc kubenswrapper[4796]: E1202 20:12:49.300986 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"45bad139-cd57-490c-a638-731a42709a6c\\\",\\\"systemUUID\\\":\\\"9fca617c-b4ed-442d-9d01-94fab08be868\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:49Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.307097 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.307168 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.307194 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.307222 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.307244 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:49Z","lastTransitionTime":"2025-12-02T20:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:49 crc kubenswrapper[4796]: E1202 20:12:49.331140 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"45bad139-cd57-490c-a638-731a42709a6c\\\",\\\"systemUUID\\\":\\\"9fca617c-b4ed-442d-9d01-94fab08be868\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:49Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.334668 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60c1710d-bf66-4687-8ee7-ea828cde5d53-metrics-certs\") pod \"network-metrics-daemon-g7nb5\" (UID: \"60c1710d-bf66-4687-8ee7-ea828cde5d53\") " 
pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:12:49 crc kubenswrapper[4796]: E1202 20:12:49.334922 4796 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 20:12:49 crc kubenswrapper[4796]: E1202 20:12:49.335075 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60c1710d-bf66-4687-8ee7-ea828cde5d53-metrics-certs podName:60c1710d-bf66-4687-8ee7-ea828cde5d53 nodeName:}" failed. No retries permitted until 2025-12-02 20:13:05.335045656 +0000 UTC m=+68.338421200 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60c1710d-bf66-4687-8ee7-ea828cde5d53-metrics-certs") pod "network-metrics-daemon-g7nb5" (UID: "60c1710d-bf66-4687-8ee7-ea828cde5d53") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.338233 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.338309 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.338330 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.338353 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.338367 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:49Z","lastTransitionTime":"2025-12-02T20:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:49 crc kubenswrapper[4796]: E1202 20:12:49.364080 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"45bad139-cd57-490c-a638-731a42709a6c\\\",\\\"systemUUID\\\":\\\"9fca617c-b4ed-442d-9d01-94fab08be868\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:49Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:49 crc kubenswrapper[4796]: E1202 20:12:49.364435 4796 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.366706 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.366755 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.366768 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.366812 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.366831 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:49Z","lastTransitionTime":"2025-12-02T20:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.470331 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.470394 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.470413 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.470438 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.470459 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:49Z","lastTransitionTime":"2025-12-02T20:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.574077 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.574150 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.574173 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.574204 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.574226 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:49Z","lastTransitionTime":"2025-12-02T20:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.680483 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.680567 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.680606 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.680642 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.680663 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:49Z","lastTransitionTime":"2025-12-02T20:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.785698 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.785798 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.785820 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.785854 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.785877 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:49Z","lastTransitionTime":"2025-12-02T20:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.889996 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.890077 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.890097 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.890127 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.890146 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:49Z","lastTransitionTime":"2025-12-02T20:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.994312 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.994382 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.994405 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.994436 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:49 crc kubenswrapper[4796]: I1202 20:12:49.994458 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:49Z","lastTransitionTime":"2025-12-02T20:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.097933 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.098026 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.098049 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.098078 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.098097 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:50Z","lastTransitionTime":"2025-12-02T20:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.201248 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.201623 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.201711 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.201816 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.202007 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:50Z","lastTransitionTime":"2025-12-02T20:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.305698 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.305768 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.305788 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.305816 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.305838 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:50Z","lastTransitionTime":"2025-12-02T20:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.409020 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.409101 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.409140 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.409178 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.409205 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:50Z","lastTransitionTime":"2025-12-02T20:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.512869 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.512919 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.512935 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.512958 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.512974 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:50Z","lastTransitionTime":"2025-12-02T20:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.616192 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.616306 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.616329 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.616366 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.616395 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:50Z","lastTransitionTime":"2025-12-02T20:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.719610 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.719678 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.719697 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.719728 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.719749 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:50Z","lastTransitionTime":"2025-12-02T20:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.823377 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.823482 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.823501 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.823529 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.823550 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:50Z","lastTransitionTime":"2025-12-02T20:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.927851 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.927943 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.927967 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.928002 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:50 crc kubenswrapper[4796]: I1202 20:12:50.928027 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:50Z","lastTransitionTime":"2025-12-02T20:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.031946 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.032023 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.032040 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.032072 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.032093 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:51Z","lastTransitionTime":"2025-12-02T20:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.135302 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.135385 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.135403 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.135432 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.135452 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:51Z","lastTransitionTime":"2025-12-02T20:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.238977 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.239040 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.239055 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.239079 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.239097 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:51Z","lastTransitionTime":"2025-12-02T20:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.264844 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.264878 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.264858 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:12:51 crc kubenswrapper[4796]: E1202 20:12:51.265018 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.264979 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:12:51 crc kubenswrapper[4796]: E1202 20:12:51.265135 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:12:51 crc kubenswrapper[4796]: E1202 20:12:51.265397 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:12:51 crc kubenswrapper[4796]: E1202 20:12:51.265501 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.342548 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.342637 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.342662 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.342698 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.342723 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:51Z","lastTransitionTime":"2025-12-02T20:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.446382 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.446458 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.446478 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.446507 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.446526 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:51Z","lastTransitionTime":"2025-12-02T20:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.549579 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.549639 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.549649 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.549668 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.549682 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:51Z","lastTransitionTime":"2025-12-02T20:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.652644 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.652709 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.652728 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.652757 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.652783 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:51Z","lastTransitionTime":"2025-12-02T20:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.755721 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.755796 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.755819 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.755850 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.755873 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:51Z","lastTransitionTime":"2025-12-02T20:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.859659 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.859729 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.859742 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.859764 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.859780 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:51Z","lastTransitionTime":"2025-12-02T20:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.964601 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.964677 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.964700 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.964729 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:51 crc kubenswrapper[4796]: I1202 20:12:51.964749 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:51Z","lastTransitionTime":"2025-12-02T20:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.067981 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.068806 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.068894 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.069006 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.069106 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:52Z","lastTransitionTime":"2025-12-02T20:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.172425 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.172476 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.172488 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.172505 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.172518 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:52Z","lastTransitionTime":"2025-12-02T20:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.275746 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.275805 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.275817 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.275838 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.275852 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:52Z","lastTransitionTime":"2025-12-02T20:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.380412 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.380469 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.380483 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.380507 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.380524 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:52Z","lastTransitionTime":"2025-12-02T20:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.484123 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.484482 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.484530 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.484579 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.484609 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:52Z","lastTransitionTime":"2025-12-02T20:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.587968 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.588052 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.588104 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.588126 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.588141 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:52Z","lastTransitionTime":"2025-12-02T20:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.692432 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.692513 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.692531 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.692561 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.692582 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:52Z","lastTransitionTime":"2025-12-02T20:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.796719 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.796838 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.796859 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.796888 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.797005 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:52Z","lastTransitionTime":"2025-12-02T20:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.900383 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.900468 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.900486 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.900517 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:52 crc kubenswrapper[4796]: I1202 20:12:52.900537 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:52Z","lastTransitionTime":"2025-12-02T20:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.003564 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.003610 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.003622 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.003639 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.003651 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:53Z","lastTransitionTime":"2025-12-02T20:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.107925 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.108003 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.108020 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.108046 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.108062 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:53Z","lastTransitionTime":"2025-12-02T20:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.211958 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.212019 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.212033 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.212056 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.212070 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:53Z","lastTransitionTime":"2025-12-02T20:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.264760 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.264780 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.264797 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:12:53 crc kubenswrapper[4796]: E1202 20:12:53.265061 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.264807 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:12:53 crc kubenswrapper[4796]: E1202 20:12:53.265209 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:12:53 crc kubenswrapper[4796]: E1202 20:12:53.265394 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:12:53 crc kubenswrapper[4796]: E1202 20:12:53.265596 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.266494 4796 scope.go:117] "RemoveContainer" containerID="5a2c4c421ad67664f3966e70b69486e8bb132090a43d539283cccbf4f0940b5e" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.316499 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.316985 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.317004 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.317568 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.317621 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:53Z","lastTransitionTime":"2025-12-02T20:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.421399 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.421454 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.421473 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.421497 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.421514 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:53Z","lastTransitionTime":"2025-12-02T20:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.524143 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.524199 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.524216 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.524242 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.524290 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:53Z","lastTransitionTime":"2025-12-02T20:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.584742 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b286j_87a81d4f-9cb5-40b1-93cf-5691b915a68e/ovnkube-controller/1.log" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.589776 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" event={"ID":"87a81d4f-9cb5-40b1-93cf-5691b915a68e","Type":"ContainerStarted","Data":"cf1e7fbc49ae946f030af5c351df582824c1cf0a10dacb194d661d3353563ff5"} Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.590742 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.613596 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:53Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.627292 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.627347 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.627368 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.627396 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.627415 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:53Z","lastTransitionTime":"2025-12-02T20:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.661722 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzw77" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93502952b6cfea496c2427f936ff031762e3577ca74b4eee4998e3800f768211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzw77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:53Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.683540 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r7pwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985789d-ae41-4ae1-938a-8f60820a303c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4176f1cf86fa360a7a980992974166a8ea22b7f6f0ba8b28539f3e411c9511eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8qs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://735f79154356ab35adf831f1f36fd0956463bf969e88e69508401c8976d24917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8qs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r7pwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:53Z is after 2025-08-24T17:21:41Z" Dec 02 
20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.712351 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64bd216ef667b40cff69e55295ff7ac4d61f8e893dc1f24f12f42aed5463417b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:53Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.725972 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g7nb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c1710d-bf66-4687-8ee7-ea828cde5d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8g97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8g97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g7nb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:53Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.729894 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.729942 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.729954 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.729974 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.729986 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:53Z","lastTransitionTime":"2025-12-02T20:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.741275 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423e1ef0-d2b1-442c-8751-372d2de26a00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 20:12:09.715158 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 20:12:09.723356 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1998965949/tls.crt::/tmp/serving-cert-1998965949/tls.key\\\\\\\"\\\\nI1202 20:12:15.056297 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 20:12:15.060468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 20:12:15.060506 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 20:12:15.060526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 20:12:15.060532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 20:12:15.077181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1202 20:12:15.077200 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 20:12:15.077201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077211 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077216 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 20:12:15.077220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 20:12:15.077223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 20:12:15.077226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 20:12:15.079033 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:53Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.754740 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9a3760c-5324-4e7c-ba6d-947d4200fd7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca809eaaaad32548ccb944ec8a40abe7dc6349d1e4b1640e53d5cd1aba9e8798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3931263dfbba517a3f109c2fec313fdb38b7232f4384fbf922d2de2f40d95840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://567f06ef173b505801a2b909e2ffd85973c697cbd3d89c10a5b118247c58c193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bce1cb4a264ff1c25ca77f3e4c9aa251e20002d987f4e1aa61fce2c99799f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:53Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.772238 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69e27721-7359-4707-aaaa-181eb24401e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6d808775c86045c4d0142fbb6f4e015251902b10a37c70bd2804c260c72b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d502f34530d387d0dca7b89c9b7cd1fea6e06b6ddf51ab1c58daefd8265e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8793493573dfb14c86ac89098d6872b1833e33f246947bedd5e5308a3d656f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e36d3a2ede3aa51b49c1512f62e61632e2ef39f384c8d25fec122ece877c7dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e36d3a2ede3aa51b49c1512f62e61632e2ef39f384c8d25fec122ece877c7dad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:53Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.785084 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:53Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.807555 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8cae09d-e96b-4d09-b52e-0914be7de1b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c89bf933f9cea43b887da33832efb415751c7e4b2c920514543b1e30a2a584c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fc05d7f51b4dd38d72107bd79f5c2e1ecfd31843e251bbeb816545d3e491a3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f59840a8a600786748b3a153bf8b1a784a55c3b43ffd837c5deaf1d14d15cbed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://417274ef4639b491835a4ae4d0df778270cdb94
ee594d2ff8bfdd86b794ca488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd0e59c5c92c3ed21d1c83f3d929d9294fc86b7d7518bc1ed5091fc7c055b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:53Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.821073 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64bc04e9-c9fc-4a80-98de-59c88457ace6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9693d017132303f9c7998ef2e45ebe24e58350336b436a84e0a3d1260b8d42d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpzx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:53Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.833123 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.833173 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.833186 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.833208 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.833223 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:53Z","lastTransitionTime":"2025-12-02T20:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.840245 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5558dc7c-93f9-4212-bf22-fdec743e47ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a71fbcbe900da771fbfbdbe666a0770e052c398e317c196741da6868bac278c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff692b0e7bcf0f9571138c9632ae4c22dddcccc147c6b0e7994e7ff6e13aa08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wzhpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:53Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.857540 4796 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-m672l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03fe6ac0-1095-4336-a25c-4dd0d6e45053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31287b814c58a714ee9e12931a432d350919edbaae9abdfda56aa0379f08961a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nnfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-m672l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:53Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.877108 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a81d4f-9cb5-40b1-93cf-5691b915a68e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://655f6d42d97749b669cfa44687f71146615027902d52926d59558a788be4553a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb637106bbcec18e199338d1deb9718195ec58a98bd3926955d08e7f77401544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://045bb912902864e6abcf1d71fd0e3f8dc941197ea1e572d2e2e2011b536ba7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be133bdee26d23a224b8da570240c1862aa542c005d8017b2d1817c7c286769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b39195ca74c45e0c948605f1741eef4d655ae6ee34c0fdf6fb1159336e99c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e244b0768662bff1ef8a3d0ad1db2df8f9470ec043bbe007542e37d69a3586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1e7fbc49ae946f030af5c351df582824c1cf0a10dacb194d661d3353563ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2c4c421ad67664f3966e70b69486e8bb132090a43d539283cccbf4f0940b5e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T20:12:30Z\\\",\\\"message\\\":\\\"etwork controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:30Z is after 2025-08-24T17:21:41Z]\\\\nI1202 20:12:30.183758 6213 obj_retry.go:409] Going to retry *v1.Pod resource setup for 12 objects: [openshift-network-diagnostics/network-check-target-xd92c openshift-kube-apiserver/kube-apiserver-crc openshift-kube-controller-manager/kube-controller-manager-crc openshift-ovn-kubernetes/ovnkube-node-b286j openshift-image-registry/node-ca-8p72p openshift-machine-config-operator/machine-config-daemon-wzhpq openshift-multus/multus-m672l openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-etcd/etcd-crc openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-multus/multus-additional-cni-plugins-mzw77 openshift-network-operator/iptables-alerter-4ln5h]\\\\nI1202 20:12:30.183771 6213 lb_config.go:1031] Cluster endpoints for openshift-dns-operator/metrics for network=default are: map[]\\\\nI1202 20:12:30.183781 6213 services_controller.go:443] Built service openshift-dns-operator/metrics LB 
cluster-wi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f94dfdee88013ea62594ac91f02990be3bf0d544fb937f370baccd6720d1f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[
{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b286j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:53Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.887751 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p72p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7cf3531-4bca-4b5d-9fa6-e70775605e3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a44d3bef0785796e52961c2f6031af15cf2f142b34f771283ad3f2d8b8abed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47szf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p72p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:53Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.900602 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b634e2ec6fa9c531cec805314972afc064cf0790587894836a857a7c48e66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:53Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.911316 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:53Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.923497 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:53Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.935780 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.935831 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.935841 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.935858 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:53 crc kubenswrapper[4796]: I1202 20:12:53.935870 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:53Z","lastTransitionTime":"2025-12-02T20:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.038480 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.038516 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.038531 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.038548 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.038559 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:54Z","lastTransitionTime":"2025-12-02T20:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.141694 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.141734 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.141743 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.141759 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.141771 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:54Z","lastTransitionTime":"2025-12-02T20:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.243993 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.244079 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.244106 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.244149 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.244178 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:54Z","lastTransitionTime":"2025-12-02T20:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.347843 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.347906 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.347925 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.347953 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.347976 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:54Z","lastTransitionTime":"2025-12-02T20:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.451769 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.451831 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.451851 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.451882 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.451900 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:54Z","lastTransitionTime":"2025-12-02T20:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.554467 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.554515 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.554527 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.554545 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.554557 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:54Z","lastTransitionTime":"2025-12-02T20:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.595014 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b286j_87a81d4f-9cb5-40b1-93cf-5691b915a68e/ovnkube-controller/2.log" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.596183 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b286j_87a81d4f-9cb5-40b1-93cf-5691b915a68e/ovnkube-controller/1.log" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.599705 4796 generic.go:334] "Generic (PLEG): container finished" podID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerID="cf1e7fbc49ae946f030af5c351df582824c1cf0a10dacb194d661d3353563ff5" exitCode=1 Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.599775 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" event={"ID":"87a81d4f-9cb5-40b1-93cf-5691b915a68e","Type":"ContainerDied","Data":"cf1e7fbc49ae946f030af5c351df582824c1cf0a10dacb194d661d3353563ff5"} Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.599850 4796 scope.go:117] "RemoveContainer" containerID="5a2c4c421ad67664f3966e70b69486e8bb132090a43d539283cccbf4f0940b5e" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.600716 4796 scope.go:117] "RemoveContainer" containerID="cf1e7fbc49ae946f030af5c351df582824c1cf0a10dacb194d661d3353563ff5" Dec 02 20:12:54 crc kubenswrapper[4796]: E1202 20:12:54.600970 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-b286j_openshift-ovn-kubernetes(87a81d4f-9cb5-40b1-93cf-5691b915a68e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.616363 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjq8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64bc04e9-c9fc-4a80-98de-59c88457ace6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9693d017132303f9c7998ef2e45ebe24e58350336b436a84e0a3d1260b8d42d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpzx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:54Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.635414 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5558dc7c-93f9-4212-bf22-fdec743e47ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a71fbcbe900da771fbfbdbe666a0770e052c398e317c196741da6868bac278c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff692b0e7bcf0f9571138c9632ae4c22dddcccc147c6b0e7994e7ff6e13aa08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wzhpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:54Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.657299 4796 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.657373 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.657396 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.657429 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.657453 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:54Z","lastTransitionTime":"2025-12-02T20:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.668277 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8cae09d-e96b-4d09-b52e-0914be7de1b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c89bf933f9cea43b887da33832efb415751c7e4b2c920514543b1e30a2a584c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fc05d7f51b4dd38d72107bd79f5c2e1ecfd31843e251bbeb816545d3e491a3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f59840a8a600786748b3a153bf8b1a784a55c3b43ffd837c5deaf1d14d15cbed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://417274ef4639b491835a4ae4d0df778270cdb94ee594d2ff8bfdd86b794ca488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd0e59c5c92c3ed21d1c83f3d929d9294fc86b7d7518bc1ed5091fc7c055b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:54Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.690942 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b634e2ec6fa9c531cec805314972afc064cf0790587894836a857a7c48e66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:54Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.711778 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:54Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.732488 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:54Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.754022 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m672l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03fe6ac0-1095-4336-a25c-4dd0d6e45053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31287b814c58a714ee9e12931a432d350919edbaae9abdfda56aa0379f08961a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nnfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m672l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:54Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.760697 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.760764 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.760778 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.760802 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.760817 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:54Z","lastTransitionTime":"2025-12-02T20:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.786917 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a81d4f-9cb5-40b1-93cf-5691b915a68e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://655f6d42d97749b669cfa44687f71146615027902d52926d59558a788be4553a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb637106bbcec18e199338d1deb9718195ec58a98bd3926955d08e7f77401544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://045bb912902864e6abcf1d71fd0e3f8dc941197ea1e572d2e2e2011b536ba7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be133bdee26d23a224b8da570240c1862aa542c005d8017b2d1817c7c286769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b39195ca74c45e0c948605f1741eef4d655ae6ee34c0fdf6fb1159336e99c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e244b0768662bff1ef8a3d0ad1db2df8f9470ec043bbe007542e37d69a3586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1e7fbc49ae946f030af5c351df582824c1cf0a10dacb194d661d3353563ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2c4c421ad67664f3966e70b69486e8bb132090a43d539283cccbf4f0940b5e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T20:12:30Z\\\",\\\"message\\\":\\\"etwork controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:30Z is after 2025-08-24T17:21:41Z]\\\\nI1202 20:12:30.183758 6213 obj_retry.go:409] Going to retry *v1.Pod resource setup for 12 objects: [openshift-network-diagnostics/network-check-target-xd92c openshift-kube-apiserver/kube-apiserver-crc openshift-kube-controller-manager/kube-controller-manager-crc openshift-ovn-kubernetes/ovnkube-node-b286j openshift-image-registry/node-ca-8p72p openshift-machine-config-operator/machine-config-daemon-wzhpq openshift-multus/multus-m672l openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-etcd/etcd-crc openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-multus/multus-additional-cni-plugins-mzw77 openshift-network-operator/iptables-alerter-4ln5h]\\\\nI1202 20:12:30.183771 6213 lb_config.go:1031] Cluster endpoints for openshift-dns-operator/metrics for network=default are: map[]\\\\nI1202 20:12:30.183781 6213 services_controller.go:443] Built service openshift-dns-operator/metrics LB 
cluster-wi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf1e7fbc49ae946f030af5c351df582824c1cf0a10dacb194d661d3353563ff5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T20:12:54Z\\\",\\\"message\\\":\\\"\\\\nI1202 20:12:54.336774 6491 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1202 20:12:54.336789 6491 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1202 20:12:54.336829 6491 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1202 20:12:54.340526 6491 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 20:12:54.340554 6491 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 20:12:54.340564 6491 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1202 20:12:54.340596 6491 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 20:12:54.340619 6491 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 20:12:54.340646 6491 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1202 20:12:54.340686 6491 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 20:12:54.340735 6491 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 20:12:54.340791 6491 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 20:12:54.340821 6491 factory.go:656] Stopping watch factory\\\\nI1202 20:12:54.340835 6491 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 20:12:54.340843 6491 ovnkube.go:599] Stopped ovnkube\\\\nI1202 20:12:54.340794 6491 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1202 
2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f94dfdee88013ea62594ac91f02990be3bf0d544fb937f370baccd6720d1f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b286j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:54Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.806055 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p72p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7cf3531-4bca-4b5d-9fa6-e70775605e3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a44d3bef0785796e52961c2f6031af15cf2f142b34f771283ad3f2d8b8abed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47szf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p72p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:54Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.835879 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzw77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93502952b6cfea496c2427f936ff031762e3577ca74b4eee4998e3800f768211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5
ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"st
artTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzw77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:54Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.851154 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r7pwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985789d-ae41-4ae1-938a-8f60820a303c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4176f1cf86fa360a7a980992974166a8ea22b7f6f0ba8b28539f3e411c9511eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8qs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://735f79154356ab35adf831f1f36fd0956463bf969e88e69508401c8976d24917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8qs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":
\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r7pwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:54Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.863598 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.863675 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.863696 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.863726 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.863748 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:54Z","lastTransitionTime":"2025-12-02T20:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.869182 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:54Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.890120 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9a3760c-5324-4e7c-ba6d-947d4200fd7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca809eaaaad32548ccb944ec8a40abe7dc6349d1e4b1640e53d5cd1aba9e8798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3931263dfbba517a3f109c2fec313fdb38b7232f4384fbf922d2de2f40d95840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://567f06ef173b505801a2b909e2ffd85973c697cbd3d89c10a5b118247c58c193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bce1cb4a264ff1c25ca77f3e4c9aa251e20002d987f4e1aa61fce2c99799f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:54Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.905916 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69e27721-7359-4707-aaaa-181eb24401e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6d808775c86045c4d0142fbb6f4e015251902b10a37c70bd2804c260c72b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d502f34530d387d0dca7b89c9b7cd1fea6e06b6ddf51ab1c58daefd8265e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8793493573dfb14c86ac89098d6872b1833e33f246947bedd5e5308a3d656f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e36d3a2ede3aa51b49c1512f62e61632e2ef39f384c8d25fec122ece877c7dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e36d3a2ede3aa51b49c1512f62e61632e2ef39f384c8d25fec122ece877c7dad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:54Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.920920 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:54Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.934682 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64bd216ef667b40cff69e55295ff7ac4d61f8e893dc1f24f12f42aed5463417b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:54Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.953467 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g7nb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c1710d-bf66-4687-8ee7-ea828cde5d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8g97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8g97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g7nb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:54Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.966484 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.966523 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.966532 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.966547 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.966556 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:54Z","lastTransitionTime":"2025-12-02T20:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:54 crc kubenswrapper[4796]: I1202 20:12:54.969604 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423e1ef0-d2b1-442c-8751-372d2de26a00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 20:12:09.715158 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 20:12:09.723356 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1998965949/tls.crt::/tmp/serving-cert-1998965949/tls.key\\\\\\\"\\\\nI1202 20:12:15.056297 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 20:12:15.060468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 20:12:15.060506 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 20:12:15.060526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 20:12:15.060532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 20:12:15.077181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1202 20:12:15.077200 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 20:12:15.077201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077211 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077216 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 20:12:15.077220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 20:12:15.077223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 20:12:15.077226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 20:12:15.079033 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:54Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.068395 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.068456 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.068475 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.068498 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.068515 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:55Z","lastTransitionTime":"2025-12-02T20:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.170436 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.170469 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.170479 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.170493 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.170504 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:55Z","lastTransitionTime":"2025-12-02T20:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.264543 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.264552 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.264653 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.264729 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:12:55 crc kubenswrapper[4796]: E1202 20:12:55.264969 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:12:55 crc kubenswrapper[4796]: E1202 20:12:55.265058 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:12:55 crc kubenswrapper[4796]: E1202 20:12:55.265241 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:12:55 crc kubenswrapper[4796]: E1202 20:12:55.265613 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.272842 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.272890 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.272906 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.272926 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.272942 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:55Z","lastTransitionTime":"2025-12-02T20:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.376514 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.376551 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.376560 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.376579 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.376590 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:55Z","lastTransitionTime":"2025-12-02T20:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.480442 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.480509 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.480527 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.480552 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.480571 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:55Z","lastTransitionTime":"2025-12-02T20:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.583180 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.583306 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.583331 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.583368 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.583396 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:55Z","lastTransitionTime":"2025-12-02T20:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.607236 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b286j_87a81d4f-9cb5-40b1-93cf-5691b915a68e/ovnkube-controller/2.log" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.613466 4796 scope.go:117] "RemoveContainer" containerID="cf1e7fbc49ae946f030af5c351df582824c1cf0a10dacb194d661d3353563ff5" Dec 02 20:12:55 crc kubenswrapper[4796]: E1202 20:12:55.613634 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-b286j_openshift-ovn-kubernetes(87a81d4f-9cb5-40b1-93cf-5691b915a68e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.634212 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423e1ef0-d2b1-442c-8751-372d2de26a00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 20:12:09.715158 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 20:12:09.723356 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1998965949/tls.crt::/tmp/serving-cert-1998965949/tls.key\\\\\\\"\\\\nI1202 20:12:15.056297 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 20:12:15.060468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 20:12:15.060506 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 20:12:15.060526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 20:12:15.060532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 20:12:15.077181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1202 20:12:15.077200 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 20:12:15.077201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077211 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077216 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 20:12:15.077220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 20:12:15.077223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 20:12:15.077226 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 20:12:15.079033 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:55Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.654966 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9a3760c-5324-4e7c-ba6d-947d4200fd7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca809eaaaad32548ccb944ec8a40abe7dc6349d1e4b1640e53d5cd1aba9e8798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3931263dfbba517a3f109c2fec313fdb38b7232f4384fbf922d2de2f40d95840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://567f06ef173b505801a2b909e2ffd85973c697cbd3d89c10a5b118247c58c193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bce1cb4a264ff1c25ca77f3e4c9aa251e20002d987f4e1aa61fce2c99799f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:55Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.677167 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69e27721-7359-4707-aaaa-181eb24401e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6d808775c86045c4d0142fbb6f4e015251902b10a37c70bd2804c260c72b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d502f34530d387d0dca7b89c9b7cd1fea6e06b6ddf51ab1c58daefd8265e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8793493573dfb14c86ac89098d6872b1833e33f246947bedd5e5308a3d656f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e36d3a2ede3aa51b49c1512f62e61632e2ef39f384c8d25fec122ece877c7dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e36d3a2ede3aa51b49c1512f62e61632e2ef39f384c8d25fec122ece877c7dad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:55Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.686226 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.686277 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.686286 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.686299 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 
20:12:55.686309 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:55Z","lastTransitionTime":"2025-12-02T20:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.695006 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:55Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.710730 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64bd216ef667b40cff69e55295ff7ac4d61f8e893dc1f24f12f42aed5463417b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:55Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.725315 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g7nb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c1710d-bf66-4687-8ee7-ea828cde5d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8g97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8g97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g7nb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:55Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.748896 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8cae09d-e96b-4d09-b52e-0914be7de1b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c89bf933f9cea43b887da33832efb415751c7e4b2c920514543b1e30a2a584c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fc05d7f51b4dd38d72107bd79f5c2e1ecfd31843e251bbeb816545d3e491a3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f59840a8a600786748b3a153bf8b1a784a55c3b43ffd837c5deaf1d14d15cbed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://417274ef4639b491835a4ae4d0df778270cdb94
ee594d2ff8bfdd86b794ca488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd0e59c5c92c3ed21d1c83f3d929d9294fc86b7d7518bc1ed5091fc7c055b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:55Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.770425 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64bc04e9-c9fc-4a80-98de-59c88457ace6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9693d017132303f9c7998ef2e45ebe24e58350336b436a84e0a3d1260b8d42d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpzx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:55Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.785933 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5558dc7c-93f9-4212-bf22-fdec743e47ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a71fbcbe900da771fbfbdbe666a0770e052c398e317c196741da6868bac278c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff692b0e7bcf0f9571138c9632ae4c22dddcccc147c6b0e7994e7ff6e13aa08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp
68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wzhpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:55Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.789876 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.789963 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.789990 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.790026 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.790050 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:55Z","lastTransitionTime":"2025-12-02T20:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.802213 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b634e2ec6fa9c531cec805314972afc064cf0790587894836a857a7c48e66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:55Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.817081 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:55Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.831607 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:55Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.850706 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m672l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03fe6ac0-1095-4336-a25c-4dd0d6e45053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31287b814c58a714ee9e12931a432d350919edbaae9abdfda56aa0379f08961a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nnfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m672l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:55Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.887055 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a81d4f-9cb5-40b1-93cf-5691b915a68e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://655f6d42d97749b669cfa44687f71146615027902d52926d59558a788be4553a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb637106bbcec18e199338d1deb9718195ec58a98bd3926955d08e7f77401544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://045bb912902864e6abcf1d71fd0e3f8dc941197ea1e572d2e2e2011b536ba7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be133bdee26d23a224b8da570240c1862aa542c005d8017b2d1817c7c286769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b39195ca74c45e0c948605f1741eef4d655ae6ee34c0fdf6fb1159336e99c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e244b0768662bff1ef8a3d0ad1db2df8f9470ec043bbe007542e37d69a3586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1e7fbc49ae946f030af5c351df582824c1cf0a
10dacb194d661d3353563ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf1e7fbc49ae946f030af5c351df582824c1cf0a10dacb194d661d3353563ff5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T20:12:54Z\\\",\\\"message\\\":\\\"\\\\nI1202 20:12:54.336774 6491 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1202 20:12:54.336789 6491 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1202 20:12:54.336829 6491 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1202 20:12:54.340526 6491 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 20:12:54.340554 6491 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 20:12:54.340564 6491 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1202 20:12:54.340596 6491 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 20:12:54.340619 6491 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 20:12:54.340646 6491 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1202 20:12:54.340686 6491 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 20:12:54.340735 6491 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 20:12:54.340791 6491 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 20:12:54.340821 6491 factory.go:656] Stopping watch factory\\\\nI1202 20:12:54.340835 6491 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 20:12:54.340843 6491 ovnkube.go:599] Stopped ovnkube\\\\nI1202 20:12:54.340794 6491 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1202 2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b286j_openshift-ovn-kubernetes(87a81d4f-9cb5-40b1-93cf-5691b915a68e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f94dfdee88013ea62594ac91f02990be3bf0d544fb937f370baccd6720d1f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b286j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:55Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.892943 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.892996 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.893014 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.893040 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.893059 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:55Z","lastTransitionTime":"2025-12-02T20:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.900856 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p72p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7cf3531-4bca-4b5d-9fa6-e70775605e3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a44d3bef0785796e52961c2f6031af15cf2f142b34f771283ad3f2d8b8abed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47szf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p72p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:55Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.918023 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:55Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.941074 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzw77" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93502952b6cfea496c2427f936ff031762e3577ca74b4eee4998e3800f768211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzw77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:55Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.960033 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r7pwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985789d-ae41-4ae1-938a-8f60820a303c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4176f1cf86fa360a7a980992974166a8ea22b7f6f0ba8b28539f3e411c9511eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8qs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://735f79154356ab35adf831f1f36fd0956463bf969e88e69508401c8976d24917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8qs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r7pwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:55Z is after 2025-08-24T17:21:41Z" Dec 02 
20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.996535 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.996592 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.996612 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.996639 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:55 crc kubenswrapper[4796]: I1202 20:12:55.996658 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:55Z","lastTransitionTime":"2025-12-02T20:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.100027 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.100088 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.100105 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.100125 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.100138 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:56Z","lastTransitionTime":"2025-12-02T20:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.203794 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.203848 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.203864 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.203887 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.203905 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:56Z","lastTransitionTime":"2025-12-02T20:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.307439 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.307524 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.307540 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.307565 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.307581 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:56Z","lastTransitionTime":"2025-12-02T20:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.411086 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.411173 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.411200 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.411236 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.411310 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:56Z","lastTransitionTime":"2025-12-02T20:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.514539 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.514586 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.514598 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.514619 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.514633 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:56Z","lastTransitionTime":"2025-12-02T20:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.617310 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.617378 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.617388 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.617404 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.617418 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:56Z","lastTransitionTime":"2025-12-02T20:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.720105 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.720203 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.720223 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.720296 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.720318 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:56Z","lastTransitionTime":"2025-12-02T20:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.824490 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.824581 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.824604 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.824640 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.824664 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:56Z","lastTransitionTime":"2025-12-02T20:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.928994 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.929069 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.929086 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.929120 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:56 crc kubenswrapper[4796]: I1202 20:12:56.929141 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:56Z","lastTransitionTime":"2025-12-02T20:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.032224 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.032331 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.032352 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.032383 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.032406 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:57Z","lastTransitionTime":"2025-12-02T20:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.135659 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.135749 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.135775 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.135816 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.135845 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:57Z","lastTransitionTime":"2025-12-02T20:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.238969 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.239048 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.239068 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.239098 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.239118 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:57Z","lastTransitionTime":"2025-12-02T20:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.264587 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.264624 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.264587 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:12:57 crc kubenswrapper[4796]: E1202 20:12:57.264744 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.264763 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:12:57 crc kubenswrapper[4796]: E1202 20:12:57.264898 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:12:57 crc kubenswrapper[4796]: E1202 20:12:57.264976 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:12:57 crc kubenswrapper[4796]: E1202 20:12:57.265232 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.281758 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:57Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.309208 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzw77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93502952b6cfea496c2427f936ff031762e3577ca74b4eee4998e3800f768211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPa
th\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728
f81ed2e48ddf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzw77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:57Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.328245 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r7pwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985789d-ae41-4ae1-938a-8f60820a303c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4176f1cf86fa360a7a980992974166a8ea22b7f6f0ba8b28539f3e411c9511eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8qs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://735f79154356ab35adf831f1f36fd0956463bf969e88e69508401c8976d24917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8qs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:31Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r7pwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:57Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.341306 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.341372 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.341397 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.341430 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.341460 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:57Z","lastTransitionTime":"2025-12-02T20:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.345969 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:57Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.370475 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64bd216ef667b40cff69e55295ff7ac4d61f8e893dc1f24f12f42aed5463417b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:57Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.383768 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g7nb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c1710d-bf66-4687-8ee7-ea828cde5d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8g97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8g97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g7nb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:57Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.401989 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"423e1ef0-d2b1-442c-8751-372d2de26a00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 20:12:09.715158 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 20:12:09.723356 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1998965949/tls.crt::/tmp/serving-cert-1998965949/tls.key\\\\\\\"\\\\nI1202 20:12:15.056297 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 20:12:15.060468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 20:12:15.060506 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 20:12:15.060526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 20:12:15.060532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 20:12:15.077181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1202 20:12:15.077200 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 20:12:15.077201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077211 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077216 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 20:12:15.077220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 20:12:15.077223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 20:12:15.077226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 20:12:15.079033 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:57Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.424336 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9a3760c-5324-4e7c-ba6d-947d4200fd7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca809eaaaad32548ccb944ec8a40abe7dc6349d1e4b1640e53d5cd1aba9e8798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3931263dfbba517a3f109c2fec313fdb38b7232f4384fbf922d2de2f40d95840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://567f06ef173b505801a2b909e2ffd85973c697cbd3d89c10a5b118247c58c193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bce1cb4a264ff1c25ca77f3e4c9aa251e20002d987f4e1aa61fce2c99799f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:57Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.447026 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.447099 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.447115 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.447144 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.447163 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:57Z","lastTransitionTime":"2025-12-02T20:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.447822 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69e27721-7359-4707-aaaa-181eb24401e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6d808775c86045c4d0142fbb6f4e015251902b10a37c70bd2804c260c72b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d502f34530d387d0dca7b89c9b7cd1fea6e06b6ddf51ab1c58daefd8265e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8793493573dfb14c86ac89098d6872b1833e33f246947bedd5e5308a3d656f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e36d3a2ede3aa51b49c1512f62e61632e2ef39f384c8d25fec122ece877c7dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e36d3a2ede3aa51b49c1512f62e61632e2ef39f384c8d25fec122ece877c7dad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:57Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.476676 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8cae09d-e96b-4d09-b52e-0914be7de1b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c89bf933f9cea43b887da33832efb415751c7e4b2c920514543b1e30a2a584c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fc
05d7f51b4dd38d72107bd79f5c2e1ecfd31843e251bbeb816545d3e491a3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f59840a8a600786748b3a153bf8b1a784a55c3b43ffd837c5deaf1d14d15cbed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://417274ef4639b491835a4ae4d0df778270cdb94ee594d2ff8bfdd86b794ca488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd0e59c5c92c3ed21d1c83f3d929d9294fc86b7d7518bc1ed5091fc7c055b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:57Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.488832 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjq8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64bc04e9-c9fc-4a80-98de-59c88457ace6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9693d017132303f9c7998ef2e45ebe24e58350336b436a84e0a3d1260b8d42d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpzx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:57Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.500551 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5558dc7c-93f9-4212-bf22-fdec743e47ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a71fbcbe900da771fbfbdbe666a0770e052c398e317c196741da6868bac278c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff692b0e7bcf0f9571138c9632ae4c22dddcccc147c6b0e7994e7ff6e13aa08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wzhpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:57Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.512682 4796 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:57Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.527320 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m672l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03fe6ac0-1095-4336-a25c-4dd0d6e45053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31287b814c58a714ee9e12931a432d350919edbaae9abdfda56aa0379f08961a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nnfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m672l\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:57Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.549008 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a81d4f-9cb5-40b1-93cf-5691b915a68e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://655f6d42d97749b669cfa44687f71146615027902d52926d59558a788be4553a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb637106bbcec18e199338d1deb9718195ec58a98bd3926955d08e7f77401544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://045bb912902864e6abcf1d71fd0e3f8dc941197ea1e572d2e2e2011b536ba7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be133bdee26d23a224b8da570240c1862aa542c005d8017b2d1817c7c286769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b39195ca74c45e0c948605f1741eef4d655ae6ee34c0fdf6fb1159336e99c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e24
4b0768662bff1ef8a3d0ad1db2df8f9470ec043bbe007542e37d69a3586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1e7fbc49ae946f030af5c351df582824c1cf0a10dacb194d661d3353563ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf1e7fbc49ae946f030af5c351df582824c1cf0a10dacb194d661d3353563ff5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T20:12:54Z\\\",\\\"message\\\":\\\"\\\\nI1202 20:12:54.336774 6491 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1202 20:12:54.336789 6491 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1202 20:12:54.336829 6491 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1202 20:12:54.340526 6491 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 20:12:54.340554 6491 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 20:12:54.340564 6491 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1202 20:12:54.340596 6491 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 20:12:54.340619 6491 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 20:12:54.340646 6491 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1202 20:12:54.340686 6491 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 20:12:54.340735 6491 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 20:12:54.340791 6491 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 20:12:54.340821 6491 factory.go:656] Stopping watch factory\\\\nI1202 20:12:54.340835 6491 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 20:12:54.340843 6491 ovnkube.go:599] Stopped ovnkube\\\\nI1202 20:12:54.340794 6491 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1202 
2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-b286j_openshift-ovn-kubernetes(87a81d4f-9cb5-40b1-93cf-5691b915a68e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f94dfdee88013ea62594ac91f02990be3bf0d544fb937f370baccd6720d1f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b286j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:57Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.549678 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.549748 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.549768 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.549795 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.549816 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:57Z","lastTransitionTime":"2025-12-02T20:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.562649 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p72p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7cf3531-4bca-4b5d-9fa6-e70775605e3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a44d3bef0785796e52961c2f6031af15cf2f142b34f771283ad3f2d8b8abed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47szf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p72p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:57Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.580626 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b634e2ec6fa9c531cec805314972afc064cf0790587894836a857a7c48e66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:57Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.595747 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:57Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.652285 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.652340 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.652356 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.652380 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.652397 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:57Z","lastTransitionTime":"2025-12-02T20:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.755923 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.755980 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.755993 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.756013 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.756029 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:57Z","lastTransitionTime":"2025-12-02T20:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.859504 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.859548 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.859558 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.859578 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.859592 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:57Z","lastTransitionTime":"2025-12-02T20:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.963609 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.963692 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.963721 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.963757 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:57 crc kubenswrapper[4796]: I1202 20:12:57.963782 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:57Z","lastTransitionTime":"2025-12-02T20:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.066890 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.066934 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.066945 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.066962 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.066973 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:58Z","lastTransitionTime":"2025-12-02T20:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.170459 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.170551 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.170580 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.170617 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.170640 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:58Z","lastTransitionTime":"2025-12-02T20:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.273925 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.274009 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.274033 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.274089 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.274115 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:58Z","lastTransitionTime":"2025-12-02T20:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.377515 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.377558 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.377569 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.377586 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.377596 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:58Z","lastTransitionTime":"2025-12-02T20:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.480728 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.480820 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.480845 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.480880 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.480904 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:58Z","lastTransitionTime":"2025-12-02T20:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.583916 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.583996 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.584028 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.584061 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.584081 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:58Z","lastTransitionTime":"2025-12-02T20:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.687368 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.687431 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.687446 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.687476 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.687493 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:58Z","lastTransitionTime":"2025-12-02T20:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.791542 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.791614 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.791633 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.791663 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.791684 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:58Z","lastTransitionTime":"2025-12-02T20:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.894859 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.894928 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.894949 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.894979 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.895000 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:58Z","lastTransitionTime":"2025-12-02T20:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.998566 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.998631 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.998645 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.998668 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:58 crc kubenswrapper[4796]: I1202 20:12:58.998684 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:58Z","lastTransitionTime":"2025-12-02T20:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.101662 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.101741 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.101781 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.101814 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.101837 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:59Z","lastTransitionTime":"2025-12-02T20:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.204773 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.204833 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.204863 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.204879 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.204888 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:59Z","lastTransitionTime":"2025-12-02T20:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.264172 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.264242 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.264305 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.264366 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:12:59 crc kubenswrapper[4796]: E1202 20:12:59.264493 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:12:59 crc kubenswrapper[4796]: E1202 20:12:59.264736 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:12:59 crc kubenswrapper[4796]: E1202 20:12:59.264867 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:12:59 crc kubenswrapper[4796]: E1202 20:12:59.265001 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.308182 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.308231 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.308244 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.308296 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.308317 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:59Z","lastTransitionTime":"2025-12-02T20:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.411508 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.411575 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.411597 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.411626 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.411652 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:59Z","lastTransitionTime":"2025-12-02T20:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.515377 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.515467 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.515493 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.515526 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.515549 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:59Z","lastTransitionTime":"2025-12-02T20:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.619522 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.619605 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.619628 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.619664 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.619687 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:59Z","lastTransitionTime":"2025-12-02T20:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.655127 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.655174 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.655191 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.655216 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.655233 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:59Z","lastTransitionTime":"2025-12-02T20:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:59 crc kubenswrapper[4796]: E1202 20:12:59.670412 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"45bad139-cd57-490c-a638-731a42709a6c\\\",\\\"systemUUID\\\":\\\"9fca617c-b4ed-442d-9d01-94fab08be868\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:59Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.674073 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.674113 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.674157 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.674182 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.674201 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:59Z","lastTransitionTime":"2025-12-02T20:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:59 crc kubenswrapper[4796]: E1202 20:12:59.690366 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"45bad139-cd57-490c-a638-731a42709a6c\\\",\\\"systemUUID\\\":\\\"9fca617c-b4ed-442d-9d01-94fab08be868\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:59Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.695856 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.695896 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.695914 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.695938 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.695960 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:59Z","lastTransitionTime":"2025-12-02T20:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:59 crc kubenswrapper[4796]: E1202 20:12:59.715925 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"45bad139-cd57-490c-a638-731a42709a6c\\\",\\\"systemUUID\\\":\\\"9fca617c-b4ed-442d-9d01-94fab08be868\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:59Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.721397 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.721495 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.721513 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.721564 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.721579 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:59Z","lastTransitionTime":"2025-12-02T20:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:59 crc kubenswrapper[4796]: E1202 20:12:59.740111 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"45bad139-cd57-490c-a638-731a42709a6c\\\",\\\"systemUUID\\\":\\\"9fca617c-b4ed-442d-9d01-94fab08be868\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:59Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.744809 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.744841 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.744875 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.744894 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.744908 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:59Z","lastTransitionTime":"2025-12-02T20:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:59 crc kubenswrapper[4796]: E1202 20:12:59.764112 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:12:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"45bad139-cd57-490c-a638-731a42709a6c\\\",\\\"systemUUID\\\":\\\"9fca617c-b4ed-442d-9d01-94fab08be868\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:12:59Z is after 2025-08-24T17:21:41Z" Dec 02 20:12:59 crc kubenswrapper[4796]: E1202 20:12:59.764417 4796 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.766927 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.767001 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.767024 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.767083 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.767109 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:59Z","lastTransitionTime":"2025-12-02T20:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.870360 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.870429 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.870444 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.870472 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.870490 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:59Z","lastTransitionTime":"2025-12-02T20:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.973402 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.973513 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.973528 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.973570 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:12:59 crc kubenswrapper[4796]: I1202 20:12:59.973583 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:12:59Z","lastTransitionTime":"2025-12-02T20:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.076421 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.076502 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.076524 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.076555 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.076595 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:00Z","lastTransitionTime":"2025-12-02T20:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.179366 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.179448 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.179460 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.179480 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.179494 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:00Z","lastTransitionTime":"2025-12-02T20:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.281768 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.281802 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.281812 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.281827 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.281841 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:00Z","lastTransitionTime":"2025-12-02T20:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.385368 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.385406 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.385414 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.385433 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.385443 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:00Z","lastTransitionTime":"2025-12-02T20:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.488920 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.488984 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.488998 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.489022 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.489038 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:00Z","lastTransitionTime":"2025-12-02T20:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.592506 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.592978 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.592989 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.593009 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.593022 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:00Z","lastTransitionTime":"2025-12-02T20:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.695573 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.695659 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.695671 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.695687 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.695699 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:00Z","lastTransitionTime":"2025-12-02T20:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.797882 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.797911 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.797930 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.797948 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.797957 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:00Z","lastTransitionTime":"2025-12-02T20:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.900990 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.901040 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.901050 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.901069 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:00 crc kubenswrapper[4796]: I1202 20:13:00.901083 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:00Z","lastTransitionTime":"2025-12-02T20:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.004428 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.004457 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.004465 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.004478 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.004487 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:01Z","lastTransitionTime":"2025-12-02T20:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.108545 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.108627 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.108650 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.108694 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.108719 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:01Z","lastTransitionTime":"2025-12-02T20:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.212303 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.212370 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.212384 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.212407 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.212447 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:01Z","lastTransitionTime":"2025-12-02T20:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.264536 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.264640 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.264685 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.264711 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:13:01 crc kubenswrapper[4796]: E1202 20:13:01.264869 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:13:01 crc kubenswrapper[4796]: E1202 20:13:01.265017 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:13:01 crc kubenswrapper[4796]: E1202 20:13:01.265180 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:13:01 crc kubenswrapper[4796]: E1202 20:13:01.265335 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
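The four pods named just above (network-check-target, network-check-source, network-metrics-daemon, networking-console-plugin) all need a pod network sandbox, so the kubelet keeps skipping their sync until a CNI configuration appears; the same missing-configuration message is what keeps the node's Ready condition False in the surrounding entries. Below is a minimal sketch of the check implied by that message, assuming the usual libcni file extensions (.conf, .conflist, .json); the script and its names are illustrative only and are not taken from this log.

#!/usr/bin/env python3
# Sketch: look for CNI network configuration the way the kubelet message above
# implies ("no CNI configuration file in /etc/kubernetes/cni/net.d/").
# The extension list is an assumption based on common libcni convention.
from pathlib import Path

CNI_CONF_DIR = Path("/etc/kubernetes/cni/net.d")
CNI_EXTENSIONS = {".conf", ".conflist", ".json"}

def cni_config_files(conf_dir: Path = CNI_CONF_DIR) -> list:
    """Return the CNI config files present in conf_dir, sorted by name."""
    if not conf_dir.is_dir():
        return []
    return sorted(p for p in conf_dir.iterdir()
                  if p.is_file() and p.suffix in CNI_EXTENSIONS)

if __name__ == "__main__":
    files = cni_config_files()
    if files:
        for f in files:
            print(f"found CNI config: {f}")
    else:
        # This is the state the log reports: NetworkPluginNotReady.
        print(f"no CNI configuration file in {CNI_CONF_DIR}/")

Until such a file exists (it is normally written by the cluster's network plugin once it starts), the "Error syncing pod, skipping" and "Node became not ready" entries below simply repeat.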
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.316021 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.316093 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.316111 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.316145 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.316165 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:01Z","lastTransitionTime":"2025-12-02T20:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.418705 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.418758 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.418769 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.418789 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.418801 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:01Z","lastTransitionTime":"2025-12-02T20:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.521921 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.521995 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.522004 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.522047 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.522059 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:01Z","lastTransitionTime":"2025-12-02T20:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.625115 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.625180 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.625194 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.625217 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.625231 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:01Z","lastTransitionTime":"2025-12-02T20:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.728449 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.728550 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.728574 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.728613 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.728637 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:01Z","lastTransitionTime":"2025-12-02T20:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.832009 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.832079 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.832105 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.832134 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.832152 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:01Z","lastTransitionTime":"2025-12-02T20:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.934293 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.934340 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.934378 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.934398 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:01 crc kubenswrapper[4796]: I1202 20:13:01.934409 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:01Z","lastTransitionTime":"2025-12-02T20:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.037030 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.037081 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.037115 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.037132 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.037142 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:02Z","lastTransitionTime":"2025-12-02T20:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.139993 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.140059 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.140069 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.140088 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.140098 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:02Z","lastTransitionTime":"2025-12-02T20:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.243271 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.243319 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.243359 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.243387 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.243399 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:02Z","lastTransitionTime":"2025-12-02T20:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.273818 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.345443 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.345486 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.345495 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.345510 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.345519 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:02Z","lastTransitionTime":"2025-12-02T20:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.447698 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.447728 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.447736 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.447748 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.447756 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:02Z","lastTransitionTime":"2025-12-02T20:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.549910 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.549973 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.549984 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.549999 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.550009 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:02Z","lastTransitionTime":"2025-12-02T20:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.652979 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.653056 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.653087 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.653102 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.653112 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:02Z","lastTransitionTime":"2025-12-02T20:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.755622 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.755709 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.755725 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.755759 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.755782 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:02Z","lastTransitionTime":"2025-12-02T20:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.857733 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.857756 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.857764 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.857777 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.857787 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:02Z","lastTransitionTime":"2025-12-02T20:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.959425 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.959463 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.959473 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.959487 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:02 crc kubenswrapper[4796]: I1202 20:13:02.959497 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:02Z","lastTransitionTime":"2025-12-02T20:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.061327 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.061579 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.061689 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.061773 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.061871 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:03Z","lastTransitionTime":"2025-12-02T20:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.164220 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.164269 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.164296 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.164312 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.164324 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:03Z","lastTransitionTime":"2025-12-02T20:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.264591 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.264591 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.264697 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:13:03 crc kubenswrapper[4796]: E1202 20:13:03.264830 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.264934 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:13:03 crc kubenswrapper[4796]: E1202 20:13:03.264979 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:13:03 crc kubenswrapper[4796]: E1202 20:13:03.265019 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:13:03 crc kubenswrapper[4796]: E1202 20:13:03.265039 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.265920 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.265943 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.265951 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.265961 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.265969 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:03Z","lastTransitionTime":"2025-12-02T20:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.370008 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.370092 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.370110 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.370163 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.370180 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:03Z","lastTransitionTime":"2025-12-02T20:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.473950 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.473993 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.474002 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.474019 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.474030 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:03Z","lastTransitionTime":"2025-12-02T20:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.577040 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.577080 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.577095 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.577113 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.577127 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:03Z","lastTransitionTime":"2025-12-02T20:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.679194 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.679320 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.679344 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.679378 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.679400 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:03Z","lastTransitionTime":"2025-12-02T20:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.782344 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.782413 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.782430 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.782455 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.782472 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:03Z","lastTransitionTime":"2025-12-02T20:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.885863 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.885912 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.885925 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.885941 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.885952 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:03Z","lastTransitionTime":"2025-12-02T20:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.988465 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.988515 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.988524 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.988540 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:03 crc kubenswrapper[4796]: I1202 20:13:03.988555 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:03Z","lastTransitionTime":"2025-12-02T20:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.091926 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.092034 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.092051 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.092076 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.092095 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:04Z","lastTransitionTime":"2025-12-02T20:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.196025 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.196100 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.196136 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.196171 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.196196 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:04Z","lastTransitionTime":"2025-12-02T20:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.299307 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.299375 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.299395 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.299421 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.299437 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:04Z","lastTransitionTime":"2025-12-02T20:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.402651 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.402696 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.402706 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.402723 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.402734 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:04Z","lastTransitionTime":"2025-12-02T20:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.505224 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.505320 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.505342 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.505371 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.505399 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:04Z","lastTransitionTime":"2025-12-02T20:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.608116 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.608156 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.608167 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.608207 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.608220 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:04Z","lastTransitionTime":"2025-12-02T20:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.711502 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.711563 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.711576 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.711600 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.711616 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:04Z","lastTransitionTime":"2025-12-02T20:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.815092 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.815142 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.815158 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.815181 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.815197 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:04Z","lastTransitionTime":"2025-12-02T20:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.917968 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.918013 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.918021 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.918036 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:04 crc kubenswrapper[4796]: I1202 20:13:04.918046 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:04Z","lastTransitionTime":"2025-12-02T20:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.020923 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.020968 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.020979 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.020996 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.021008 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:05Z","lastTransitionTime":"2025-12-02T20:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.123867 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.123926 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.123934 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.123948 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.123956 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:05Z","lastTransitionTime":"2025-12-02T20:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.227185 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.227230 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.227242 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.227280 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.227294 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:05Z","lastTransitionTime":"2025-12-02T20:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.264966 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.265025 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.265058 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:13:05 crc kubenswrapper[4796]: E1202 20:13:05.265093 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.265317 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:13:05 crc kubenswrapper[4796]: E1202 20:13:05.265429 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:13:05 crc kubenswrapper[4796]: E1202 20:13:05.265511 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:13:05 crc kubenswrapper[4796]: E1202 20:13:05.265244 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.330343 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.330388 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.330399 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.330417 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.330429 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:05Z","lastTransitionTime":"2025-12-02T20:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.429647 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60c1710d-bf66-4687-8ee7-ea828cde5d53-metrics-certs\") pod \"network-metrics-daemon-g7nb5\" (UID: \"60c1710d-bf66-4687-8ee7-ea828cde5d53\") " pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:13:05 crc kubenswrapper[4796]: E1202 20:13:05.429888 4796 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 20:13:05 crc kubenswrapper[4796]: E1202 20:13:05.430231 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60c1710d-bf66-4687-8ee7-ea828cde5d53-metrics-certs podName:60c1710d-bf66-4687-8ee7-ea828cde5d53 nodeName:}" failed. No retries permitted until 2025-12-02 20:13:37.430196533 +0000 UTC m=+100.433572277 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60c1710d-bf66-4687-8ee7-ea828cde5d53-metrics-certs") pod "network-metrics-daemon-g7nb5" (UID: "60c1710d-bf66-4687-8ee7-ea828cde5d53") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.433066 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.433147 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.433163 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.433218 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.433235 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:05Z","lastTransitionTime":"2025-12-02T20:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.536179 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.536217 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.536228 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.536246 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.536342 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:05Z","lastTransitionTime":"2025-12-02T20:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.639287 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.639352 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.639364 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.639381 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.639395 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:05Z","lastTransitionTime":"2025-12-02T20:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.741925 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.741962 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.741974 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.741991 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.742002 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:05Z","lastTransitionTime":"2025-12-02T20:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.844272 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.844309 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.844322 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.844338 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.844348 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:05Z","lastTransitionTime":"2025-12-02T20:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.947207 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.947571 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.947665 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.947762 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:05 crc kubenswrapper[4796]: I1202 20:13:05.947840 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:05Z","lastTransitionTime":"2025-12-02T20:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.049953 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.050000 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.050013 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.050031 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.050048 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:06Z","lastTransitionTime":"2025-12-02T20:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.152530 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.152595 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.152610 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.152636 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.152651 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:06Z","lastTransitionTime":"2025-12-02T20:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.256066 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.256122 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.256140 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.256168 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.256188 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:06Z","lastTransitionTime":"2025-12-02T20:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.360232 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.360311 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.360324 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.360344 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.360357 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:06Z","lastTransitionTime":"2025-12-02T20:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.462760 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.462810 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.462823 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.462842 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.462853 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:06Z","lastTransitionTime":"2025-12-02T20:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.564522 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.564567 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.564581 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.564596 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.564608 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:06Z","lastTransitionTime":"2025-12-02T20:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.648209 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-m672l_03fe6ac0-1095-4336-a25c-4dd0d6e45053/kube-multus/0.log" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.648319 4796 generic.go:334] "Generic (PLEG): container finished" podID="03fe6ac0-1095-4336-a25c-4dd0d6e45053" containerID="31287b814c58a714ee9e12931a432d350919edbaae9abdfda56aa0379f08961a" exitCode=1 Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.648357 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-m672l" event={"ID":"03fe6ac0-1095-4336-a25c-4dd0d6e45053","Type":"ContainerDied","Data":"31287b814c58a714ee9e12931a432d350919edbaae9abdfda56aa0379f08961a"} Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.648845 4796 scope.go:117] "RemoveContainer" containerID="31287b814c58a714ee9e12931a432d350919edbaae9abdfda56aa0379f08961a" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.661182 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a525e-e8d2-4718-b378-db2a988fd4c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f232da7c9328c076303d93cdcc696870504e56361a76a1e8c3222978856d6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2996617c5bfa9169fbd57b20c802321da67f8190e58dc4884fce27057448ce84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2996617c5bfa9169fbd57b20c802321da67f8190e58dc4884fce27057448ce84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:06Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.670985 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.671015 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.671024 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.671039 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.671048 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:06Z","lastTransitionTime":"2025-12-02T20:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.679740 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b634e2ec6fa9c531cec805314972afc064cf0790587894836a857a7c48e66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:06Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.695478 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:06Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.714269 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:06Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.731053 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m672l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03fe6ac0-1095-4336-a25c-4dd0d6e45053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:13:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:13:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31287b814c58a714ee9e12931a432d350919edbaae9abdfda56aa0379f08961a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31287b814c58a714ee9e12931a432d350919edbaae9abdfda56aa0379f08961a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T20:13:05Z\\\",\\\"message\\\":\\\"2025-12-02T20:12:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_045b79a6-56ee-4f87-b021-6b86f8892f0b\\\\n2025-12-02T20:12:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_045b79a6-56ee-4f87-b021-6b86f8892f0b to /host/opt/cni/bin/\\\\n2025-12-02T20:12:20Z [verbose] multus-daemon started\\\\n2025-12-02T20:12:20Z [verbose] Readiness Indicator file check\\\\n2025-12-02T20:13:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nnfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-m672l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:06Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.754309 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a81d4f-9cb5-40b1-93cf-5691b915a68e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://655f6d42d97749b669cfa44687f71146615027902d52926d59558a788be4553a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb637106bbcec18e199338d1deb9718195ec58a98bd3926955d08e7f77401544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://045bb912902864e6abcf1d71fd0e3f8dc941197ea1e572d2e2e2011b536ba7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be133bdee26d23a224b8da570240c1862aa542c005d8017b2d1817c7c286769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b39195ca74c45e0c948605f1741eef4d655ae6ee34c0fdf6fb1159336e99c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e244b0768662bff1ef8a3d0ad1db2df8f9470ec043bbe007542e37d69a3586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1e7fbc49ae946f030af5c351df582824c1cf0a10dacb194d661d3353563ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf1e7fbc49ae946f030af5c351df582824c1cf0a10dacb194d661d3353563ff5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T20:12:54Z\\\",\\\"message\\\":\\\"\\\\nI1202 20:12:54.336774 6491 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1202 20:12:54.336789 6491 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1202 20:12:54.336829 6491 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1202 20:12:54.340526 6491 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 20:12:54.340554 6491 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 20:12:54.340564 6491 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1202 20:12:54.340596 6491 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 20:12:54.340619 6491 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 20:12:54.340646 6491 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1202 20:12:54.340686 6491 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 20:12:54.340735 6491 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 20:12:54.340791 6491 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 20:12:54.340821 6491 factory.go:656] Stopping watch factory\\\\nI1202 20:12:54.340835 6491 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 20:12:54.340843 6491 ovnkube.go:599] Stopped ovnkube\\\\nI1202 20:12:54.340794 6491 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1202 
2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-b286j_openshift-ovn-kubernetes(87a81d4f-9cb5-40b1-93cf-5691b915a68e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f94dfdee88013ea62594ac91f02990be3bf0d544fb937f370baccd6720d1f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b286j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:06Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.767601 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p72p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7cf3531-4bca-4b5d-9fa6-e70775605e3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a44d3bef0785796e52961c2f6031af15cf2f142b34f771283ad3f2d8b8abed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47szf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p72p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:06Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.773101 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.773137 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.773159 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.773177 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.773191 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:06Z","lastTransitionTime":"2025-12-02T20:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.783599 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:06Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.802985 4796 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzw77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93502952b6cfea496c2427f936ff031762e3577ca74b4eee4998e3800f768211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c
857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-
release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzw77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:06Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.817137 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r7pwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985789d-ae41-4ae1-938a-8f60820a303c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4176f1cf86fa360a7a980992974166a8ea22b7f6f0ba8b28539f3e411c9511eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8qs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://735f79154356ab35adf831f1f36fd0956463bf969e88e69508401c8976d24917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8qs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r7pwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:06Z is after 2025-08-24T17:21:41Z" Dec 02 
20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.832720 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423e1ef0-d2b1-442c-8751-372d2de26a00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 20:12:09.715158 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 20:12:09.723356 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1998965949/tls.crt::/tmp/serving-cert-1998965949/tls.key\\\\\\\"\\\\nI1202 20:12:15.056297 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 20:12:15.060468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 20:12:15.060506 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 20:12:15.060526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 20:12:15.060532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 20:12:15.077181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1202 20:12:15.077200 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 20:12:15.077201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077211 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077216 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 20:12:15.077220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 20:12:15.077223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 20:12:15.077226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 20:12:15.079033 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:06Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.848829 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9a3760c-5324-4e7c-ba6d-947d4200fd7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca809eaaaad32548ccb944ec8a40abe7dc6349d1e4b1640e53d5cd1aba9e8798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3931263dfbba517a3f109c2fec313fdb38b7232f4384fbf922d2de2f40d95840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://567f06ef173b505801a2b909e2ffd85973c697cbd3d89c10a5b118247c58c193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bce1cb4a264ff1c25ca77f3e4c9aa251e20002d987f4e1aa61fce2c99799f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:06Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.860735 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69e27721-7359-4707-aaaa-181eb24401e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6d808775c86045c4d0142fbb6f4e015251902b10a37c70bd2804c260c72b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d502f34530d387d0dca7b89c9b7cd1fea6e06b6ddf51ab1c58daefd8265e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8793493573dfb14c86ac89098d6872b1833e33f246947bedd5e5308a3d656f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e36d3a2ede3aa51b49c1512f62e61632e2ef39f384c8d25fec122ece877c7dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e36d3a2ede3aa51b49c1512f62e61632e2ef39f384c8d25fec122ece877c7dad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:06Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.873486 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:06Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.876008 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.876057 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.876077 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.876106 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.876128 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:06Z","lastTransitionTime":"2025-12-02T20:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.886103 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64bd216ef667b40cff69e55295ff7ac4d61f8e893dc1f24f12f42aed5463417b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:06Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.901897 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g7nb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c1710d-bf66-4687-8ee7-ea828cde5d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8g97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8g97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g7nb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:06Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.922848 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8cae09d-e96b-4d09-b52e-0914be7de1b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c89bf933f9cea43b887da33832efb415751c7e4b2c920514543b1e30a2a584c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fc05d7f51b4dd38d72107bd79f5c2e1ecfd31843e251bbeb816545d3e491a3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f59840a8a600786748b3a153bf8b1a784a55c3b43ffd837c5deaf1d14d15cbed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://417274ef4639b491835a4ae4d0df778270cdb94
ee594d2ff8bfdd86b794ca488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd0e59c5c92c3ed21d1c83f3d929d9294fc86b7d7518bc1ed5091fc7c055b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:06Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.937489 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64bc04e9-c9fc-4a80-98de-59c88457ace6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9693d017132303f9c7998ef2e45ebe24e58350336b436a84e0a3d1260b8d42d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpzx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:06Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.951375 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5558dc7c-93f9-4212-bf22-fdec743e47ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a71fbcbe900da771fbfbdbe666a0770e052c398e317c196741da6868bac278c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff692b0e7bcf0f9571138c9632ae4c22dddcccc147c6b0e7994e7ff6e13aa08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp
68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wzhpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:06Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.978980 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.979025 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.979044 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.979073 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:06 crc kubenswrapper[4796]: I1202 20:13:06.979094 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:06Z","lastTransitionTime":"2025-12-02T20:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.081952 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.082176 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.082185 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.082207 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.082220 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:07Z","lastTransitionTime":"2025-12-02T20:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.184907 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.184957 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.184971 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.184990 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.185004 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:07Z","lastTransitionTime":"2025-12-02T20:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.264304 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.264457 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.264652 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:13:07 crc kubenswrapper[4796]: E1202 20:13:07.264633 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.264708 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:13:07 crc kubenswrapper[4796]: E1202 20:13:07.264753 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:13:07 crc kubenswrapper[4796]: E1202 20:13:07.264843 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:13:07 crc kubenswrapper[4796]: E1202 20:13:07.264932 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.286494 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423e1ef0-d2b1-442c-8751-372d2de26a00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f894
5c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 20:12:09.715158 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 20:12:09.723356 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1998965949/tls.crt::/tmp/serving-cert-1998965949/tls.key\\\\\\\"\\\\nI1202 20:12:15.056297 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 20:12:15.060468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 20:12:15.060506 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 20:12:15.060526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 20:12:15.060532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 20:12:15.077181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1202 20:12:15.077200 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 20:12:15.077201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077211 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077216 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 20:12:15.077220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 20:12:15.077223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 20:12:15.077226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 20:12:15.079033 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:07Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.287072 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.287385 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.287406 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.287433 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.287457 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:07Z","lastTransitionTime":"2025-12-02T20:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.301555 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9a3760c-5324-4e7c-ba6d-947d4200fd7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca809eaaaad32548ccb944ec8a40abe7dc6349d1e4b1640e53d5cd1aba9e8798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3931263dfbba517a3f109c2fec313fdb38b7232f4384fbf922d2de2f40d95840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://567f06ef173b505801a2b909e2ffd85973c697cbd3d89c10a5b118247c58c193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bce1cb4a264ff1c25ca77f3e4c9aa251e20002d987f4e1aa61fce2c99799f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:07Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.315237 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69e27721-7359-4707-aaaa-181eb24401e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6d808775c86045c4d0142fbb6f4e015251902b10a37c70bd2804c260c72b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d502f34530d387d0dca7b89c9b7cd1fea6e06b6ddf51ab1c58daefd8265e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8793493573dfb14c86ac89098d6872b1833e33f246947bedd5e5308a3d656f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e36d3a2ede3aa51b49c1512f62e61632e2ef39f384c8d25fec122ece877c7dad\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e36d3a2ede3aa51b49c1512f62e61632e2ef39f384c8d25fec122ece877c7dad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:07Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.327956 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:07Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.343830 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64bd216ef667b40cff69e55295ff7ac4d61f8e893dc1f24f12f42aed5463417b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:07Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.361721 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g7nb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c1710d-bf66-4687-8ee7-ea828cde5d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8g97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8g97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g7nb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:07Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.384573 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8cae09d-e96b-4d09-b52e-0914be7de1b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c89bf933f9cea43b887da33832efb415751c7e4b2c920514543b1e30a2a584c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fc05d7f51b4dd38d72107bd79f5c2e1ecfd31843e251bbeb816545d3e491a3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f59840a8a600786748b3a153bf8b1a784a55c3b43ffd837c5deaf1d14d15cbed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://417274ef4639b491835a4ae4d0df778270cdb94
ee594d2ff8bfdd86b794ca488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd0e59c5c92c3ed21d1c83f3d929d9294fc86b7d7518bc1ed5091fc7c055b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:07Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.389952 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.390016 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.390031 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.390059 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.390075 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:07Z","lastTransitionTime":"2025-12-02T20:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.395131 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64bc04e9-c9fc-4a80-98de-59c88457ace6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9693d017132303f9c7998ef2e45ebe24e58350336b436a84e0a3d1260b8d42d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpzx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:07Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.408222 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5558dc7c-93f9-4212-bf22-fdec743e47ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a71fbcbe900da771fbfbdbe666a0770e052c398e317c196741da6868bac278c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff692b0e7bcf0f9571138c9632ae4c22dddcccc147c6b0e7994e7ff6e13aa08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wzhpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:07Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.418470 4796 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a525e-e8d2-4718-b378-db2a988fd4c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f232da7c9328c076303d93cdcc696870504e56361a76a1e8c3222978856d6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2996617c5bfa9169fbd57b20c802321da67f8190e58dc4884fce27057448ce84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2996617c5bfa9169fbd57b20c802321da67f8190e58dc4884fce27057448ce84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:07Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.430626 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b634e2ec6fa9c531cec805314972afc064cf0790587894836a857a7c48e66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:07Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.442881 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:07Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.458312 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:07Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.472874 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m672l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03fe6ac0-1095-4336-a25c-4dd0d6e45053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:13:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:13:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31287b814c58a714ee9e12931a432d350919edbaae9abdfda56aa0379f08961a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31287b814c58a714ee9e12931a432d350919edbaae9abdfda56aa0379f08961a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T20:13:05Z\\\",\\\"message\\\":\\\"2025-12-02T20:12:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_045b79a6-56ee-4f87-b021-6b86f8892f0b\\\\n2025-12-02T20:12:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_045b79a6-56ee-4f87-b021-6b86f8892f0b to /host/opt/cni/bin/\\\\n2025-12-02T20:12:20Z [verbose] multus-daemon started\\\\n2025-12-02T20:12:20Z [verbose] Readiness Indicator file check\\\\n2025-12-02T20:13:05Z [error] have you checked that your 
default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nnfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m672l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:07Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.492877 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.492921 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.492937 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.492953 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.492963 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:07Z","lastTransitionTime":"2025-12-02T20:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.496492 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a81d4f-9cb5-40b1-93cf-5691b915a68e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://655f6d42d97749b669cfa44687f71146615027902d52926d59558a788be4553a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb637106bbcec18e199338d1deb9718195ec58a98bd3926955d08e7f77401544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://045bb912902864e6abcf1d71fd0e3f8dc941197ea1e572d2e2e2011b536ba7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be133bdee26d23a224b8da570240c1862aa542c005d8017b2d1817c7c286769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b39195ca74c45e0c948605f1741eef4d655ae6ee34c0fdf6fb1159336e99c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e244b0768662bff1ef8a3d0ad1db2df8f9470ec043bbe007542e37d69a3586\\\",\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1e7fbc49ae946f030af5c351df582824c1cf0a10dacb194d661d3353563ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf1e7fbc49ae946f030af5c351df582824c1cf0a10dacb194d661d3353563ff5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T20:12:54Z\\\",\\\"message\\\":\\\"\\\\nI1202 20:12:54.336774 6491 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1202 20:12:54.336789 6491 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1202 20:12:54.336829 6491 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1202 20:12:54.340526 6491 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 20:12:54.340554 6491 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 20:12:54.340564 6491 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1202 20:12:54.340596 6491 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 20:12:54.340619 6491 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 20:12:54.340646 6491 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1202 20:12:54.340686 6491 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 20:12:54.340735 6491 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 20:12:54.340791 6491 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 20:12:54.340821 6491 factory.go:656] Stopping watch factory\\\\nI1202 20:12:54.340835 6491 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 20:12:54.340843 6491 ovnkube.go:599] Stopped ovnkube\\\\nI1202 20:12:54.340794 6491 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1202 
2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-b286j_openshift-ovn-kubernetes(87a81d4f-9cb5-40b1-93cf-5691b915a68e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f94dfdee88013ea62594ac91f02990be3bf0d544fb937f370baccd6720d1f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b286j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:07Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.511822 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p72p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7cf3531-4bca-4b5d-9fa6-e70775605e3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a44d3bef0785796e52961c2f6031af15cf2f142b34f771283ad3f2d8b8abed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47szf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p72p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:07Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.523726 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:07Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.537201 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzw77" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93502952b6cfea496c2427f936ff031762e3577ca74b4eee4998e3800f768211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzw77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:07Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.553350 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r7pwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985789d-ae41-4ae1-938a-8f60820a303c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4176f1cf86fa360a7a980992974166a8ea22b7f6f0ba8b28539f3e411c9511eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8qs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://735f79154356ab35adf831f1f36fd0956463bf969e88e69508401c8976d24917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8qs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r7pwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:07Z is after 2025-08-24T17:21:41Z" Dec 02 
20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.595323 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.595354 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.595366 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.595383 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.595393 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:07Z","lastTransitionTime":"2025-12-02T20:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.653799 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-m672l_03fe6ac0-1095-4336-a25c-4dd0d6e45053/kube-multus/0.log" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.653871 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-m672l" event={"ID":"03fe6ac0-1095-4336-a25c-4dd0d6e45053","Type":"ContainerStarted","Data":"e45c5c8b97a0407e37a1c8e704004a0752d9143a666d1ed8d5df6e954adc172a"} Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.677568 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a81d4f-9cb5-40b1-93cf-5691b915a68e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://655f6d42d97749b669cfa44687f71146615027902d52926d59558a788be4553a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb637106bbcec18e199338d1deb9718195ec58a98bd3926955d08e7f77401544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://045bb912902864e6abcf1d71fd0e3f8dc941197ea1e572d2e2e2011b536ba7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be133bdee26d23a224b8da570240c1862aa542c005d8017b2d1817c7c286769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b39195ca74c45e0c948605f1741eef4d655ae6ee34c0fdf6fb1159336e99c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e244b0768662bff1ef8a3d0ad1db2df8f9470ec043bbe007542e37d69a3586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1e7fbc49ae946f030af5c351df582824c1cf0a
10dacb194d661d3353563ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf1e7fbc49ae946f030af5c351df582824c1cf0a10dacb194d661d3353563ff5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T20:12:54Z\\\",\\\"message\\\":\\\"\\\\nI1202 20:12:54.336774 6491 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1202 20:12:54.336789 6491 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1202 20:12:54.336829 6491 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1202 20:12:54.340526 6491 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 20:12:54.340554 6491 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 20:12:54.340564 6491 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1202 20:12:54.340596 6491 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 20:12:54.340619 6491 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 20:12:54.340646 6491 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1202 20:12:54.340686 6491 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 20:12:54.340735 6491 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 20:12:54.340791 6491 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 20:12:54.340821 6491 factory.go:656] Stopping watch factory\\\\nI1202 20:12:54.340835 6491 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 20:12:54.340843 6491 ovnkube.go:599] Stopped ovnkube\\\\nI1202 20:12:54.340794 6491 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1202 2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b286j_openshift-ovn-kubernetes(87a81d4f-9cb5-40b1-93cf-5691b915a68e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f94dfdee88013ea62594ac91f02990be3bf0d544fb937f370baccd6720d1f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b286j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:07Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.688204 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p72p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7cf3531-4bca-4b5d-9fa6-e70775605e3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a44d3bef0785796e52961c2f6031af15cf2f142b34f771283ad3f2d8b8abed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47szf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p72p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:07Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.697801 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.697830 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.697840 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.697856 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.697865 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:07Z","lastTransitionTime":"2025-12-02T20:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.699994 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a525e-e8d2-4718-b378-db2a988fd4c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f232da7c9328c076303d93cdcc696870504e56361a76a1e8c3222978856d6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2996617c5bfa9169fbd57b20c802321da67f8190e58dc4884fce27057448ce84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2996617c5bfa9169fbd57b20c802321da67f8190e58dc4884fce27057448ce84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:07Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.714298 4796 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b634e2ec6fa9c531cec805314972afc064cf0790587894836a857a7c48e66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:07Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.726379 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:07Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.739551 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:07Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.752676 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m672l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03fe6ac0-1095-4336-a25c-4dd0d6e45053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45c5c8b97a0407e37a1c8e704004a0752d9143a666d1ed8d5df6e954adc172a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31287b814c58a714ee9e12931a432d350919edbaae9abdfda56aa0379f08961a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T20:13:05Z\\\",\\\"message\\\":\\\"2025-12-02T20:12:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_045b79a6-56ee-4f87-b021-6b86f8892f0b\\\\n2025-12-02T20:12:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_045b79a6-56ee-4f87-b021-6b86f8892f0b to /host/opt/cni/bin/\\\\n2025-12-02T20:12:20Z [verbose] multus-daemon started\\\\n2025-12-02T20:12:20Z [verbose] Readiness Indicator file check\\\\n2025-12-02T20:13:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:13:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nnfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m672l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:07Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.768334 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:07Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.790582 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzw77" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93502952b6cfea496c2427f936ff031762e3577ca74b4eee4998e3800f768211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzw77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:07Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.801156 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.801223 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:07 crc 
kubenswrapper[4796]: I1202 20:13:07.801243 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.801319 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.801347 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:07Z","lastTransitionTime":"2025-12-02T20:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.809233 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r7pwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985789d-ae41-4ae1-938a-8f60820a303c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4176f1cf86fa360a7a980992974166a8ea22b7f6f0ba8b28539f3e411c9511eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8qs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://735f79154356ab35adf831f1f36fd0956463bf969e88e69508401c8976d24917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:1
2:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8qs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r7pwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:07Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.826309 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g7nb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c1710d-bf66-4687-8ee7-ea828cde5d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8g97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8g97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g7nb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:07Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.844440 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"423e1ef0-d2b1-442c-8751-372d2de26a00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 20:12:09.715158 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 20:12:09.723356 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1998965949/tls.crt::/tmp/serving-cert-1998965949/tls.key\\\\\\\"\\\\nI1202 20:12:15.056297 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 20:12:15.060468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 20:12:15.060506 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 20:12:15.060526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 20:12:15.060532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 20:12:15.077181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1202 20:12:15.077200 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 20:12:15.077201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077211 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077216 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 20:12:15.077220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 20:12:15.077223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 20:12:15.077226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 20:12:15.079033 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:07Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.860598 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9a3760c-5324-4e7c-ba6d-947d4200fd7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca809eaaaad32548ccb944ec8a40abe7dc6349d1e4b1640e53d5cd1aba9e8798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3931263dfbba517a3f109c2fec313fdb38b7232f4384fbf922d2de2f40d95840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://567f06ef173b505801a2b909e2ffd85973c697cbd3d89c10a5b118247c58c193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bce1cb4a264ff1c25ca77f3e4c9aa251e20002d987f4e1aa61fce2c99799f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:07Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.874666 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69e27721-7359-4707-aaaa-181eb24401e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6d808775c86045c4d0142fbb6f4e015251902b10a37c70bd2804c260c72b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d502f34530d387d0dca7b89c9b7cd1fea6e06b6ddf51ab1c58daefd8265e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8793493573dfb14c86ac89098d6872b1833e33f246947bedd5e5308a3d656f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e36d3a2ede3aa51b49c1512f62e61632e2ef39f384c8d25fec122ece877c7dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e36d3a2ede3aa51b49c1512f62e61632e2ef39f384c8d25fec122ece877c7dad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:07Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.887760 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:07Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.899547 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64bd216ef667b40cff69e55295ff7ac4d61f8e893dc1f24f12f42aed5463417b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:07Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.904550 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.904616 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.904628 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.904646 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.904657 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:07Z","lastTransitionTime":"2025-12-02T20:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.920056 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8cae09d-e96b-4d09-b52e-0914be7de1b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c89bf933f9cea43b887da33832efb415751c7e4b2c920514543b1e30a2a584c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fc05d7f51b4dd38d72107bd79f5c2e1ecfd31843e251bbeb816545d3e491a3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f59840a8a600786748b3a153bf8b1a784a55c3b43ffd837c5deaf1d14d15cbed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://417274ef4639b491835a4ae4d0df778270cdb94ee594d2ff8bfdd86b794ca488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd0e59c5c92c3ed21d1c83f3d929d9294fc86b7d7518bc1ed5091fc7c055b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:07Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.933115 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjq8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64bc04e9-c9fc-4a80-98de-59c88457ace6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9693d017132303f9c7998ef2e45ebe24e58350336b436a84e0a3d1260b8d42d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpzx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:07Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:07 crc kubenswrapper[4796]: I1202 20:13:07.950039 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5558dc7c-93f9-4212-bf22-fdec743e47ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a71fbcbe900da771fbfbdbe666a0770e052c398e317c196741da6868bac278c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff692b0e7bcf0f9571138c9632ae4c22dddcccc147c6b0e7994e7ff6e13aa08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wzhpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:07Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.007957 4796 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.007992 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.008001 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.008015 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.008027 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:08Z","lastTransitionTime":"2025-12-02T20:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.110946 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.110987 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.110997 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.111014 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.111025 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:08Z","lastTransitionTime":"2025-12-02T20:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.214671 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.214710 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.214719 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.214733 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.214743 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:08Z","lastTransitionTime":"2025-12-02T20:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.317750 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.317798 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.317809 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.317827 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.317837 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:08Z","lastTransitionTime":"2025-12-02T20:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.420935 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.421008 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.421023 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.421058 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.421074 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:08Z","lastTransitionTime":"2025-12-02T20:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.524899 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.524947 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.524959 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.524978 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.524990 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:08Z","lastTransitionTime":"2025-12-02T20:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.627837 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.627890 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.627907 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.627926 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.627938 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:08Z","lastTransitionTime":"2025-12-02T20:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.731240 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.731313 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.731323 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.731340 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.731352 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:08Z","lastTransitionTime":"2025-12-02T20:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.835579 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.835658 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.835680 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.835713 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.835739 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:08Z","lastTransitionTime":"2025-12-02T20:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.940395 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.940455 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.940471 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.940499 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:08 crc kubenswrapper[4796]: I1202 20:13:08.940519 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:08Z","lastTransitionTime":"2025-12-02T20:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.044878 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.044950 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.044999 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.045031 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.045052 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:09Z","lastTransitionTime":"2025-12-02T20:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.149608 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.149666 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.149680 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.149705 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.149725 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:09Z","lastTransitionTime":"2025-12-02T20:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.259322 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.259379 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.259396 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.259420 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.259438 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:09Z","lastTransitionTime":"2025-12-02T20:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.264832 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:13:09 crc kubenswrapper[4796]: E1202 20:13:09.264981 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.265107 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.265161 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.265332 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:13:09 crc kubenswrapper[4796]: E1202 20:13:09.265426 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:13:09 crc kubenswrapper[4796]: E1202 20:13:09.265507 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:13:09 crc kubenswrapper[4796]: E1202 20:13:09.265627 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.265735 4796 scope.go:117] "RemoveContainer" containerID="cf1e7fbc49ae946f030af5c351df582824c1cf0a10dacb194d661d3353563ff5" Dec 02 20:13:09 crc kubenswrapper[4796]: E1202 20:13:09.265889 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-b286j_openshift-ovn-kubernetes(87a81d4f-9cb5-40b1-93cf-5691b915a68e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.362476 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.362532 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.362545 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.362566 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.362583 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:09Z","lastTransitionTime":"2025-12-02T20:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.465071 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.465122 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.465133 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.465155 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.465169 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:09Z","lastTransitionTime":"2025-12-02T20:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.567857 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.567885 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.567895 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.567910 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.567921 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:09Z","lastTransitionTime":"2025-12-02T20:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.671003 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.671652 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.671674 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.671813 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.671838 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:09Z","lastTransitionTime":"2025-12-02T20:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.774102 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.774148 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.774162 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.774177 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.774190 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:09Z","lastTransitionTime":"2025-12-02T20:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.788095 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.788130 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.788144 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.788159 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.788170 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:09Z","lastTransitionTime":"2025-12-02T20:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:09 crc kubenswrapper[4796]: E1202 20:13:09.803643 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:13:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:13:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:13:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:13:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:13:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:13:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:13:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:13:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"45bad139-cd57-490c-a638-731a42709a6c\\\",\\\"systemUUID\\\":\\\"9fca617c-b4ed-442d-9d01-94fab08be868\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:09Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.807529 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.807582 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.807603 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.807631 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.807648 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:09Z","lastTransitionTime":"2025-12-02T20:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:09 crc kubenswrapper[4796]: E1202 20:13:09.876795 4796 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
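The status-patch failure shown above, and the closing "update node status exceeds retry count", all come down to one TLS problem: the node.network-node-identity.openshift.io webhook endpoint the kubelet calls at https://127.0.0.1:9743 presents a certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-12-02. A minimal Go sketch along the following lines (assuming it is run on the CRC node itself; the program name and output format are illustrative and not taken from the log) can confirm what that endpoint is actually serving:

    // webhook_cert_check.go - illustrative sketch: print the validity window of the
    // certificate served by the webhook address named in the failing Post call.
    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
        "net"
        "time"
    )

    func main() {
        // Address taken from the log line: Post "https://127.0.0.1:9743/node?timeout=10s".
        addr := "127.0.0.1:9743"

        // Skip chain verification deliberately: the goal is to read the expired
        // certificate, which strict verification (as performed by the kubelet) rejects.
        conn, err := tls.DialWithDialer(
            &net.Dialer{Timeout: 5 * time.Second},
            "tcp", addr,
            &tls.Config{InsecureSkipVerify: true},
        )
        if err != nil {
            log.Fatalf("TLS dial to %s failed: %v", addr, err)
        }
        defer conn.Close()

        now := time.Now()
        for _, cert := range conn.ConnectionState().PeerCertificates {
            fmt.Printf("subject=%q notBefore=%s notAfter=%s expiredNow=%t\n",
                cert.Subject.CommonName,
                cert.NotBefore.Format(time.RFC3339),
                cert.NotAfter.Format(time.RFC3339),
                now.After(cert.NotAfter))
        }
    }

The NotReady condition recorded in the same window points at a second symptom, the absence of any CNI configuration file under /etc/kubernetes/cni/net.d/. A similar sketch, under the same assumption of running directly on the node and using the conventional CNI file extensions rather than anything stated in the log, checks that directory:

    // cni_conf_check.go - illustrative sketch: report whether the directory named in
    // the KubeletNotReady message contains any CNI network configuration files.
    package main

    import (
        "fmt"
        "log"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        dir := "/etc/kubernetes/cni/net.d"

        entries, err := os.ReadDir(dir)
        if err != nil {
            log.Fatalf("cannot read %s: %v", dir, err)
        }

        found := 0
        for _, e := range entries {
            // CNI config loaders conventionally pick up *.conf, *.conflist and *.json.
            switch strings.ToLower(filepath.Ext(e.Name())) {
            case ".conf", ".conflist", ".json":
                found++
                fmt.Println("CNI config present:", filepath.Join(dir, e.Name()))
            }
        }
        if found == 0 {
            fmt.Println("no CNI configuration files in", dir, "- consistent with the NotReady condition above")
        }
    }

Both sketches only inspect state on the node; neither changes anything.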
event="NodeHasSufficientMemory" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.878423 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.878433 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.878448 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.878459 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:09Z","lastTransitionTime":"2025-12-02T20:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.980564 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.980623 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.980644 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.980671 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:09 crc kubenswrapper[4796]: I1202 20:13:09.980688 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:09Z","lastTransitionTime":"2025-12-02T20:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.082915 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.082948 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.082961 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.082976 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.082988 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:10Z","lastTransitionTime":"2025-12-02T20:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.186097 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.186166 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.186175 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.186193 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.186204 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:10Z","lastTransitionTime":"2025-12-02T20:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.289194 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.289245 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.289273 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.289291 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.289302 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:10Z","lastTransitionTime":"2025-12-02T20:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.392472 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.392533 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.392550 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.392577 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.392591 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:10Z","lastTransitionTime":"2025-12-02T20:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.495958 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.496020 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.496038 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.496065 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.496086 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:10Z","lastTransitionTime":"2025-12-02T20:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.599308 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.599361 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.599377 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.599397 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.599410 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:10Z","lastTransitionTime":"2025-12-02T20:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.702204 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.702318 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.702340 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.702366 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.702381 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:10Z","lastTransitionTime":"2025-12-02T20:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.805456 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.805512 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.805524 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.805544 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.805558 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:10Z","lastTransitionTime":"2025-12-02T20:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.911391 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.911478 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.911499 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.911531 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:10 crc kubenswrapper[4796]: I1202 20:13:10.911550 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:10Z","lastTransitionTime":"2025-12-02T20:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.015581 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.015652 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.015673 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.015700 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.015721 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:11Z","lastTransitionTime":"2025-12-02T20:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.118141 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.118194 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.118210 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.118231 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.118246 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:11Z","lastTransitionTime":"2025-12-02T20:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.220777 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.220841 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.220855 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.220880 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.220894 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:11Z","lastTransitionTime":"2025-12-02T20:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.264518 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.264653 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.264518 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:13:11 crc kubenswrapper[4796]: E1202 20:13:11.264760 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.264653 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:13:11 crc kubenswrapper[4796]: E1202 20:13:11.264912 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:13:11 crc kubenswrapper[4796]: E1202 20:13:11.265036 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:13:11 crc kubenswrapper[4796]: E1202 20:13:11.265177 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.324578 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.324665 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.324693 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.324730 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.324758 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:11Z","lastTransitionTime":"2025-12-02T20:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.427754 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.428086 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.428172 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.428288 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.428364 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:11Z","lastTransitionTime":"2025-12-02T20:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.531838 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.532193 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.532294 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.532403 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.532513 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:11Z","lastTransitionTime":"2025-12-02T20:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.636617 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.636684 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.636698 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.636720 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.636736 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:11Z","lastTransitionTime":"2025-12-02T20:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.740749 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.740840 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.740862 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.740898 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.740918 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:11Z","lastTransitionTime":"2025-12-02T20:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.844386 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.844471 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.844503 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.844540 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.844565 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:11Z","lastTransitionTime":"2025-12-02T20:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.947823 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.947892 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.947911 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.947943 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:11 crc kubenswrapper[4796]: I1202 20:13:11.947968 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:11Z","lastTransitionTime":"2025-12-02T20:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.051959 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.052042 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.052062 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.052094 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.052113 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:12Z","lastTransitionTime":"2025-12-02T20:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.155436 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.155500 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.155525 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.155558 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.155583 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:12Z","lastTransitionTime":"2025-12-02T20:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.258801 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.258862 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.258873 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.258897 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.258912 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:12Z","lastTransitionTime":"2025-12-02T20:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.362502 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.362577 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.362597 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.362629 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.362651 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:12Z","lastTransitionTime":"2025-12-02T20:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.466910 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.466981 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.467000 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.467030 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.467050 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:12Z","lastTransitionTime":"2025-12-02T20:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.570607 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.570694 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.570714 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.570744 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.570770 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:12Z","lastTransitionTime":"2025-12-02T20:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.673772 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.673863 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.673892 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.673930 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.673958 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:12Z","lastTransitionTime":"2025-12-02T20:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.776794 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.776875 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.776893 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.776923 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.776947 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:12Z","lastTransitionTime":"2025-12-02T20:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.880831 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.880948 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.880967 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.880997 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.881017 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:12Z","lastTransitionTime":"2025-12-02T20:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.984531 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.984884 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.984915 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.984956 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:12 crc kubenswrapper[4796]: I1202 20:13:12.984978 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:12Z","lastTransitionTime":"2025-12-02T20:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.089201 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.089276 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.089288 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.089309 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.089320 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:13Z","lastTransitionTime":"2025-12-02T20:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.193450 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.193544 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.193570 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.193605 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.193625 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:13Z","lastTransitionTime":"2025-12-02T20:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.265011 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.265091 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.265162 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:13:13 crc kubenswrapper[4796]: E1202 20:13:13.265319 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.265370 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:13:13 crc kubenswrapper[4796]: E1202 20:13:13.265576 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:13:13 crc kubenswrapper[4796]: E1202 20:13:13.265766 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:13:13 crc kubenswrapper[4796]: E1202 20:13:13.265881 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.297539 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.297597 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.297620 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.297652 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.297678 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:13Z","lastTransitionTime":"2025-12-02T20:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.401049 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.401124 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.401143 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.401170 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.401187 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:13Z","lastTransitionTime":"2025-12-02T20:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.505376 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.505463 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.505482 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.505517 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.505597 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:13Z","lastTransitionTime":"2025-12-02T20:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.609669 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.609751 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.609770 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.609804 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.609825 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:13Z","lastTransitionTime":"2025-12-02T20:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.712905 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.712964 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.712976 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.713006 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.713019 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:13Z","lastTransitionTime":"2025-12-02T20:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.816720 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.816802 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.816822 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.816853 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.816874 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:13Z","lastTransitionTime":"2025-12-02T20:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.920911 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.920975 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.920987 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.921011 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:13 crc kubenswrapper[4796]: I1202 20:13:13.921027 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:13Z","lastTransitionTime":"2025-12-02T20:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.024489 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.024565 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.024585 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.024615 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.024636 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:14Z","lastTransitionTime":"2025-12-02T20:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.128142 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.128229 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.128247 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.128313 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.128336 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:14Z","lastTransitionTime":"2025-12-02T20:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.231217 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.231360 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.231389 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.231422 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.231445 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:14Z","lastTransitionTime":"2025-12-02T20:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.333817 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.333859 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.333871 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.333887 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.333900 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:14Z","lastTransitionTime":"2025-12-02T20:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.437810 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.437871 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.437894 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.437928 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.437956 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:14Z","lastTransitionTime":"2025-12-02T20:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.541007 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.541087 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.541113 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.541151 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.541176 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:14Z","lastTransitionTime":"2025-12-02T20:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.644566 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.644671 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.644701 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.644736 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.644759 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:14Z","lastTransitionTime":"2025-12-02T20:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.748550 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.748619 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.748639 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.748668 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.748688 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:14Z","lastTransitionTime":"2025-12-02T20:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.851910 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.852321 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.852340 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.852367 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.852383 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:14Z","lastTransitionTime":"2025-12-02T20:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.956454 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.956575 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.956597 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.956632 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:14 crc kubenswrapper[4796]: I1202 20:13:14.956658 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:14Z","lastTransitionTime":"2025-12-02T20:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.060702 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.060776 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.060794 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.060823 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.060848 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:15Z","lastTransitionTime":"2025-12-02T20:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.165577 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.165686 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.165749 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.165789 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.165815 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:15Z","lastTransitionTime":"2025-12-02T20:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.264774 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.264874 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.264887 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.264783 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:13:15 crc kubenswrapper[4796]: E1202 20:13:15.265067 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:13:15 crc kubenswrapper[4796]: E1202 20:13:15.265311 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:13:15 crc kubenswrapper[4796]: E1202 20:13:15.265483 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:13:15 crc kubenswrapper[4796]: E1202 20:13:15.265641 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.269801 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.269870 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.269888 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.269915 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.269935 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:15Z","lastTransitionTime":"2025-12-02T20:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.374131 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.374222 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.374248 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.374322 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.374352 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:15Z","lastTransitionTime":"2025-12-02T20:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.478167 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.478218 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.478233 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.478278 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.478295 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:15Z","lastTransitionTime":"2025-12-02T20:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.582959 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.583063 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.583089 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.583130 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.583164 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:15Z","lastTransitionTime":"2025-12-02T20:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.687023 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.687108 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.687139 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.687170 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.687189 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:15Z","lastTransitionTime":"2025-12-02T20:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.790530 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.790589 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.790608 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.790645 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.790666 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:15Z","lastTransitionTime":"2025-12-02T20:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.894296 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.894436 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.894455 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.894489 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.894510 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:15Z","lastTransitionTime":"2025-12-02T20:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.997303 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.997382 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.997406 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.997435 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:15 crc kubenswrapper[4796]: I1202 20:13:15.997458 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:15Z","lastTransitionTime":"2025-12-02T20:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.101048 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.101133 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.101150 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.101181 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.101201 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:16Z","lastTransitionTime":"2025-12-02T20:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.204947 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.205023 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.205045 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.205075 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.205097 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:16Z","lastTransitionTime":"2025-12-02T20:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.309015 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.309116 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.309146 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.309184 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.309208 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:16Z","lastTransitionTime":"2025-12-02T20:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.413089 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.413180 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.413202 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.413229 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.413248 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:16Z","lastTransitionTime":"2025-12-02T20:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.517631 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.517727 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.517751 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.517790 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.517821 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:16Z","lastTransitionTime":"2025-12-02T20:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.621018 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.621147 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.621174 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.621210 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.621235 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:16Z","lastTransitionTime":"2025-12-02T20:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.724374 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.724440 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.724466 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.724501 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.724533 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:16Z","lastTransitionTime":"2025-12-02T20:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.828524 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.828629 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.828655 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.828693 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.828720 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:16Z","lastTransitionTime":"2025-12-02T20:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.932680 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.932752 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.932779 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.932814 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:16 crc kubenswrapper[4796]: I1202 20:13:16.932845 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:16Z","lastTransitionTime":"2025-12-02T20:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.036715 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.036771 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.036786 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.036810 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.036827 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:17Z","lastTransitionTime":"2025-12-02T20:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.140555 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.140668 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.140689 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.140733 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.140756 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:17Z","lastTransitionTime":"2025-12-02T20:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.244326 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.244411 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.244432 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.244464 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.244488 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:17Z","lastTransitionTime":"2025-12-02T20:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.264109 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.264158 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.264306 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:13:17 crc kubenswrapper[4796]: E1202 20:13:17.264442 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.264481 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:13:17 crc kubenswrapper[4796]: E1202 20:13:17.264619 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:13:17 crc kubenswrapper[4796]: E1202 20:13:17.264792 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:13:17 crc kubenswrapper[4796]: E1202 20:13:17.264943 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.285799 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a525e-e8d2-4718-b378-db2a988fd4c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f232da7c9328c076303d93cdcc696870504e56361a76a1e8c3222978856d6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2996617c5bfa9169fbd57b20c802321da67f8190e58dc4884fce27057448ce84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2996617c5bfa9169fbd57b20c802321da67f8190e58dc4884fce27057448ce84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{
\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:17Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.310384 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b634e2ec6fa9c531cec805314972afc064cf0790587894836a857a7c48e66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:17Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.330668 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:17Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.347057 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.347145 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.347170 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.347208 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.347238 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:17Z","lastTransitionTime":"2025-12-02T20:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.361242 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:17Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.377381 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m672l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03fe6ac0-1095-4336-a25c-4dd0d6e45053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45c5c8b97a0407e37a1c8e704004a0752d9143a666d1ed8d5df6e954adc172a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31287b814c58a714ee9e12931a432d350919edbaae9abdfda56aa0379f08961a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T20:13:05Z\\\",\\\"message\\\":\\\"2025-12-02T20:12:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_045b79a6-56ee-4f87-b021-6b86f8892f0b\\\\n2025-12-02T20:12:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_045b79a6-56ee-4f87-b021-6b86f8892f0b to /host/opt/cni/bin/\\\\n2025-12-02T20:12:20Z [verbose] multus-daemon started\\\\n2025-12-02T20:12:20Z [verbose] Readiness Indicator file check\\\\n2025-12-02T20:13:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:13:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nnfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m672l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:17Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.409322 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a81d4f-9cb5-40b1-93cf-5691b915a68e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://655f6d42d97749b669cfa44687f71146615027902d52926d59558a788be4553a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb637106bbcec18e199338d1deb9718195ec58a98bd3926955d08e7f77401544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://045bb912902864e6abcf1d71fd0e3f8dc941197ea1e572d2e2e2011b536ba7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be133bdee26d23a224b8da570240c1862aa542c005d8017b2d1817c7c286769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b39195ca74c45e0c948605f1741eef4d655ae6ee34c0fdf6fb1159336e99c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e244b0768662bff1ef8a3d0ad1db2df8f9470ec043bbe007542e37d69a3586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1e7fbc49ae946f030af5c351df582824c1cf0a10dacb194d661d3353563ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf1e7fbc49ae946f030af5c351df582824c1cf0a10dacb194d661d3353563ff5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T20:12:54Z\\\",\\\"message\\\":\\\"\\\\nI1202 20:12:54.336774 6491 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1202 20:12:54.336789 6491 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1202 20:12:54.336829 6491 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1202 20:12:54.340526 6491 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 20:12:54.340554 6491 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 20:12:54.340564 6491 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1202 20:12:54.340596 6491 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 20:12:54.340619 6491 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 20:12:54.340646 6491 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1202 20:12:54.340686 6491 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 20:12:54.340735 6491 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 20:12:54.340791 6491 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 20:12:54.340821 6491 factory.go:656] Stopping watch factory\\\\nI1202 20:12:54.340835 6491 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 20:12:54.340843 6491 ovnkube.go:599] Stopped ovnkube\\\\nI1202 20:12:54.340794 6491 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1202 2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b286j_openshift-ovn-kubernetes(87a81d4f-9cb5-40b1-93cf-5691b915a68e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f94dfdee88013ea62594ac91f02990be3bf0d544fb937f370baccd6720d1f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b286j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:17Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.424911 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p72p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7cf3531-4bca-4b5d-9fa6-e70775605e3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a44d3bef0785796e52961c2f6031af15cf2f142b34f771283ad3f2d8b8abed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47szf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p72p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:17Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.439987 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" 
for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:17Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.450401 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.450473 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.450500 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.450537 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.450562 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:17Z","lastTransitionTime":"2025-12-02T20:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.467888 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzw77" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93502952b6cfea496c2427f936ff031762e3577ca74b4eee4998e3800f768211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzw77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:17Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.490020 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r7pwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985789d-ae41-4ae1-938a-8f60820a303c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4176f1cf86fa360a7a980992974166a8ea22b7f6f0ba8b28539f3e411c9511eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8qs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://735f79154356ab35adf831f1f36fd0956463bf969e88e69508401c8976d24917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8qs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r7pwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:17Z is after 2025-08-24T17:21:41Z" Dec 02 
20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.514311 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423e1ef0-d2b1-442c-8751-372d2de26a00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 20:12:09.715158 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 20:12:09.723356 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1998965949/tls.crt::/tmp/serving-cert-1998965949/tls.key\\\\\\\"\\\\nI1202 20:12:15.056297 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 20:12:15.060468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 20:12:15.060506 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 20:12:15.060526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 20:12:15.060532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 20:12:15.077181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1202 20:12:15.077200 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 20:12:15.077201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077211 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077216 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 20:12:15.077220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 20:12:15.077223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 20:12:15.077226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 20:12:15.079033 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:17Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.531721 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9a3760c-5324-4e7c-ba6d-947d4200fd7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca809eaaaad32548ccb944ec8a40abe7dc6349d1e4b1640e53d5cd1aba9e8798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3931263dfbba517a3f109c2fec313fdb38b7232f4384fbf922d2de2f40d95840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://567f06ef173b505801a2b909e2ffd85973c697cbd3d89c10a5b118247c58c193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bce1cb4a264ff1c25ca77f3e4c9aa251e20002d987f4e1aa61fce2c99799f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:17Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.552602 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69e27721-7359-4707-aaaa-181eb24401e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6d808775c86045c4d0142fbb6f4e015251902b10a37c70bd2804c260c72b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d502f34530d387d0dca7b89c9b7cd1fea6e06b6ddf51ab1c58daefd8265e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8793493573dfb14c86ac89098d6872b1833e33f246947bedd5e5308a3d656f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e36d3a2ede3aa51b49c1512f62e61632e2ef39f384c8d25fec122ece877c7dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e36d3a2ede3aa51b49c1512f62e61632e2ef39f384c8d25fec122ece877c7dad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:17Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.554612 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.554680 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.554700 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.554727 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 
20:13:17.554746 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:17Z","lastTransitionTime":"2025-12-02T20:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.574086 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:17Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.595124 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64bd216ef667b40cff69e55295ff7ac4d61f8e893dc1f24f12f42aed5463417b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:17Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.613701 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g7nb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c1710d-bf66-4687-8ee7-ea828cde5d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8g97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8g97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g7nb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:17Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.656095 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8cae09d-e96b-4d09-b52e-0914be7de1b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c89bf933f9cea43b887da33832efb415751c7e4b2c920514543b1e30a2a584c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fc05d7f51b4dd38d72107bd79f5c2e1ecfd31843e251bbeb816545d3e491a3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f59840a8a600786748b3a153bf8b1a784a55c3b43ffd837c5deaf1d14d15cbed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://417274ef4639b491835a4ae4d0df778270cdb94
ee594d2ff8bfdd86b794ca488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd0e59c5c92c3ed21d1c83f3d929d9294fc86b7d7518bc1ed5091fc7c055b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:17Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.658622 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.658662 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.658698 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.658722 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.658738 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:17Z","lastTransitionTime":"2025-12-02T20:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.675928 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64bc04e9-c9fc-4a80-98de-59c88457ace6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9693d017132303f9c7998ef2e45ebe24e58350336b436a84e0a3d1260b8d42d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpzx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:17Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.695910 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5558dc7c-93f9-4212-bf22-fdec743e47ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a71fbcbe900da771fbfbdbe666a0770e052c398e317c196741da6868bac278c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff692b0e7bcf0f9571138c9632ae4c22dddcccc147c6b0e7994e7ff6e13aa08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wzhpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:17Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.761439 4796 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.761497 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.761517 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.761548 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.761567 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:17Z","lastTransitionTime":"2025-12-02T20:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.864808 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.864860 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.864877 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.864899 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.864919 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:17Z","lastTransitionTime":"2025-12-02T20:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.968472 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.968557 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.968580 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.968611 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:17 crc kubenswrapper[4796]: I1202 20:13:17.968628 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:17Z","lastTransitionTime":"2025-12-02T20:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.071975 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.072029 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.072041 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.072064 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.072080 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:18Z","lastTransitionTime":"2025-12-02T20:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.175468 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.175547 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.175564 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.175592 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.175614 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:18Z","lastTransitionTime":"2025-12-02T20:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.279517 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.279604 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.279637 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.279670 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.279696 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:18Z","lastTransitionTime":"2025-12-02T20:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.383281 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.383347 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.383367 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.383409 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.383433 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:18Z","lastTransitionTime":"2025-12-02T20:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.488093 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.488363 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.488449 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.488532 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.489058 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:18Z","lastTransitionTime":"2025-12-02T20:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.592798 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.592868 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.592885 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.592924 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.592942 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:18Z","lastTransitionTime":"2025-12-02T20:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.696934 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.696987 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.697004 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.697033 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.697051 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:18Z","lastTransitionTime":"2025-12-02T20:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.800446 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.800518 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.800537 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.800567 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.800590 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:18Z","lastTransitionTime":"2025-12-02T20:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.903567 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.903641 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.903670 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.903710 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:18 crc kubenswrapper[4796]: I1202 20:13:18.903739 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:18Z","lastTransitionTime":"2025-12-02T20:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.007309 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.007382 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.007407 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.007439 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.007463 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:19Z","lastTransitionTime":"2025-12-02T20:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.112533 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.112614 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.112633 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.112664 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.112692 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:19Z","lastTransitionTime":"2025-12-02T20:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.182098 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.182315 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.182417 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:13:19 crc kubenswrapper[4796]: E1202 20:13:19.182509 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:23.182445754 +0000 UTC m=+146.185821328 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:13:19 crc kubenswrapper[4796]: E1202 20:13:19.182585 4796 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 20:13:19 crc kubenswrapper[4796]: E1202 20:13:19.182696 4796 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 20:13:19 crc kubenswrapper[4796]: E1202 20:13:19.182703 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 20:14:23.182667889 +0000 UTC m=+146.186043583 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 20:13:19 crc kubenswrapper[4796]: E1202 20:13:19.182874 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 20:14:23.182844503 +0000 UTC m=+146.186220257 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.216043 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.216139 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.216166 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.216205 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.216232 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:19Z","lastTransitionTime":"2025-12-02T20:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.264756 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:13:19 crc kubenswrapper[4796]: E1202 20:13:19.265046 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.265122 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.265179 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.265122 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:13:19 crc kubenswrapper[4796]: E1202 20:13:19.265362 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:13:19 crc kubenswrapper[4796]: E1202 20:13:19.265475 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:13:19 crc kubenswrapper[4796]: E1202 20:13:19.265559 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.284013 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.284125 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:13:19 crc kubenswrapper[4796]: E1202 20:13:19.284385 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 20:13:19 crc kubenswrapper[4796]: E1202 20:13:19.284429 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 20:13:19 crc kubenswrapper[4796]: E1202 20:13:19.284443 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 20:13:19 crc kubenswrapper[4796]: E1202 20:13:19.284452 4796 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 20:13:19 crc kubenswrapper[4796]: E1202 20:13:19.284479 4796 projected.go:288] 
Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 20:13:19 crc kubenswrapper[4796]: E1202 20:13:19.284504 4796 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 20:13:19 crc kubenswrapper[4796]: E1202 20:13:19.284571 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 20:14:23.284537215 +0000 UTC m=+146.287912779 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 20:13:19 crc kubenswrapper[4796]: E1202 20:13:19.284614 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 20:14:23.284599887 +0000 UTC m=+146.287975461 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.321144 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.321221 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.321239 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.321320 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.321345 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:19Z","lastTransitionTime":"2025-12-02T20:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.425936 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.425995 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.426006 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.426029 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.426042 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:19Z","lastTransitionTime":"2025-12-02T20:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.530163 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.530339 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.530372 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.530415 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.530438 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:19Z","lastTransitionTime":"2025-12-02T20:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.633965 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.634039 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.634056 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.634082 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.634102 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:19Z","lastTransitionTime":"2025-12-02T20:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.737755 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.737844 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.737864 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.737907 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.737929 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:19Z","lastTransitionTime":"2025-12-02T20:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.842871 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.842949 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.842972 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.843004 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.843027 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:19Z","lastTransitionTime":"2025-12-02T20:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.945822 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.945872 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.945882 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.945902 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:19 crc kubenswrapper[4796]: I1202 20:13:19.945916 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:19Z","lastTransitionTime":"2025-12-02T20:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.049430 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.049503 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.049522 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.049550 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.049571 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:20Z","lastTransitionTime":"2025-12-02T20:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.063796 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.063851 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.063871 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.063896 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.063919 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:20Z","lastTransitionTime":"2025-12-02T20:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:20 crc kubenswrapper[4796]: E1202 20:13:20.087744 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:13:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:13:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:13:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:13:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:13:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:13:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:13:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:13:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"45bad139-cd57-490c-a638-731a42709a6c\\\",\\\"systemUUID\\\":\\\"9fca617c-b4ed-442d-9d01-94fab08be868\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.093862 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.093946 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.093984 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.094051 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.094081 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:20Z","lastTransitionTime":"2025-12-02T20:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:20 crc kubenswrapper[4796]: E1202 20:13:20.116220 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:13:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:13:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:13:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:13:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:13:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:13:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:13:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:13:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"45bad139-cd57-490c-a638-731a42709a6c\\\",\\\"systemUUID\\\":\\\"9fca617c-b4ed-442d-9d01-94fab08be868\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.121921 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.121996 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.122013 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.122038 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.122053 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:20Z","lastTransitionTime":"2025-12-02T20:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:20 crc kubenswrapper[4796]: E1202 20:13:20.140199 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:13:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:13:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:13:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:13:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:13:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:13:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:13:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:13:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"45bad139-cd57-490c-a638-731a42709a6c\\\",\\\"systemUUID\\\":\\\"9fca617c-b4ed-442d-9d01-94fab08be868\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.146344 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.146518 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.146543 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.146575 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.146597 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:20Z","lastTransitionTime":"2025-12-02T20:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:20 crc kubenswrapper[4796]: E1202 20:13:20.167771 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:13:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:13:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:13:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:13:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:13:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:13:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:13:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:13:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"45bad139-cd57-490c-a638-731a42709a6c\\\",\\\"systemUUID\\\":\\\"9fca617c-b4ed-442d-9d01-94fab08be868\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.174056 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.174137 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.174155 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.174185 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.174206 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:20Z","lastTransitionTime":"2025-12-02T20:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:20 crc kubenswrapper[4796]: E1202 20:13:20.196603 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:13:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:13:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:13:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:13:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:13:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:13:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T20:13:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T20:13:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"45bad139-cd57-490c-a638-731a42709a6c\\\",\\\"systemUUID\\\":\\\"9fca617c-b4ed-442d-9d01-94fab08be868\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:20Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:20 crc kubenswrapper[4796]: E1202 20:13:20.196834 4796 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.199334 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.199406 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.199425 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.199477 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.199496 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:20Z","lastTransitionTime":"2025-12-02T20:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.303662 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.303726 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.303743 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.303775 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.303794 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:20Z","lastTransitionTime":"2025-12-02T20:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.407500 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.407566 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.407585 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.407614 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.407632 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:20Z","lastTransitionTime":"2025-12-02T20:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.511860 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.511928 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.511952 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.511988 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.512013 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:20Z","lastTransitionTime":"2025-12-02T20:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.615544 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.615628 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.615649 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.615688 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.615713 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:20Z","lastTransitionTime":"2025-12-02T20:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.719369 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.719467 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.719490 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.719521 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.719544 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:20Z","lastTransitionTime":"2025-12-02T20:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.823575 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.823645 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.823663 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.823691 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.823712 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:20Z","lastTransitionTime":"2025-12-02T20:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.927751 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.927815 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.927832 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.927858 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:20 crc kubenswrapper[4796]: I1202 20:13:20.927877 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:20Z","lastTransitionTime":"2025-12-02T20:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.031353 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.031429 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.031448 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.031482 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.031502 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:21Z","lastTransitionTime":"2025-12-02T20:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.135210 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.135247 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.135320 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.135339 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.135355 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:21Z","lastTransitionTime":"2025-12-02T20:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.238152 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.238240 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.238280 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.238309 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.238322 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:21Z","lastTransitionTime":"2025-12-02T20:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.264771 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.264899 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.264961 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:13:21 crc kubenswrapper[4796]: E1202 20:13:21.265490 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:13:21 crc kubenswrapper[4796]: E1202 20:13:21.265641 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.265722 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:13:21 crc kubenswrapper[4796]: E1202 20:13:21.265862 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:13:21 crc kubenswrapper[4796]: E1202 20:13:21.266650 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.267283 4796 scope.go:117] "RemoveContainer" containerID="cf1e7fbc49ae946f030af5c351df582824c1cf0a10dacb194d661d3353563ff5" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.341991 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.342066 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.342093 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.342123 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.342145 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:21Z","lastTransitionTime":"2025-12-02T20:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.449580 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.449632 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.449646 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.449667 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.449681 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:21Z","lastTransitionTime":"2025-12-02T20:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.553295 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.553356 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.553370 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.553393 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.553408 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:21Z","lastTransitionTime":"2025-12-02T20:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.656171 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.656244 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.656295 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.656332 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.656354 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:21Z","lastTransitionTime":"2025-12-02T20:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.759729 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.759800 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.759819 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.759850 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.759872 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:21Z","lastTransitionTime":"2025-12-02T20:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.864188 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.864287 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.864310 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.864338 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.864357 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:21Z","lastTransitionTime":"2025-12-02T20:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.968142 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.968218 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.968242 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.968303 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:21 crc kubenswrapper[4796]: I1202 20:13:21.968325 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:21Z","lastTransitionTime":"2025-12-02T20:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.072136 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.072221 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.072246 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.072315 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.072343 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:22Z","lastTransitionTime":"2025-12-02T20:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.175783 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.175864 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.175884 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.175916 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.175937 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:22Z","lastTransitionTime":"2025-12-02T20:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.279605 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.279706 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.279723 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.279751 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.279768 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:22Z","lastTransitionTime":"2025-12-02T20:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.383185 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.383323 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.383350 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.383411 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.383428 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:22Z","lastTransitionTime":"2025-12-02T20:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.487440 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.487558 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.487576 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.487605 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.487653 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:22Z","lastTransitionTime":"2025-12-02T20:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.591439 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.591489 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.591502 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.591523 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.591535 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:22Z","lastTransitionTime":"2025-12-02T20:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.695538 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.695625 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.695639 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.695664 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.695681 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:22Z","lastTransitionTime":"2025-12-02T20:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.719724 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b286j_87a81d4f-9cb5-40b1-93cf-5691b915a68e/ovnkube-controller/2.log" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.723314 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" event={"ID":"87a81d4f-9cb5-40b1-93cf-5691b915a68e","Type":"ContainerStarted","Data":"f0a2cf28b941d4231e586fdbf9805047b83aa21ad325c180afbee60a6d5227c5"} Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.724272 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.744096 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a525e-e8d2-4718-b378-db2a988fd4c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f232da7c9328c076303d93cdcc696870504e56361a76a1e8c3222978856d6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2996617c5bfa9169fbd57b20c802321da67f8190e58dc4884fce27057448ce84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2996617c5bfa9169fbd57b20c802321da67f8190e58dc4884fce27057448ce84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:22Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.769300 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b634e2ec6fa9c531cec805314972afc064cf0790587894836a857a7c48e66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:22Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.791239 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:22Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.798677 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.798754 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.798790 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.798815 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.798833 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:22Z","lastTransitionTime":"2025-12-02T20:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.814627 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:22Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.838074 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m672l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03fe6ac0-1095-4336-a25c-4dd0d6e45053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45c5c8b97a0407e37a1c8e704004a0752d9143a666d1ed8d5df6e954adc172a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31287b814c58a714ee9e12931a432d350919edbaae9abdfda56aa0379f08961a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T20:13:05Z\\\",\\\"message\\\":\\\"2025-12-02T20:12:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_045b79a6-56ee-4f87-b021-6b86f8892f0b\\\\n2025-12-02T20:12:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_045b79a6-56ee-4f87-b021-6b86f8892f0b to /host/opt/cni/bin/\\\\n2025-12-02T20:12:20Z [verbose] multus-daemon started\\\\n2025-12-02T20:12:20Z [verbose] Readiness Indicator file check\\\\n2025-12-02T20:13:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:13:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nnfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m672l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:22Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.864762 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a81d4f-9cb5-40b1-93cf-5691b915a68e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://655f6d42d97749b669cfa44687f71146615027902d52926d59558a788be4553a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb637106bbcec18e199338d1deb9718195ec58a98bd3926955d08e7f77401544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://045bb912902864e6abcf1d71fd0e3f8dc941197ea1e572d2e2e2011b536ba7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be133bdee26d23a224b8da570240c1862aa542c005d8017b2d1817c7c286769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b39195ca74c45e0c948605f1741eef4d655ae6ee34c0fdf6fb1159336e99c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e244b0768662bff1ef8a3d0ad1db2df8f9470ec043bbe007542e37d69a3586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a2cf28b941d4231e586fdbf9805047b83aa21ad325c180afbee60a6d5227c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf1e7fbc49ae946f030af5c351df582824c1cf0a10dacb194d661d3353563ff5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T20:12:54Z\\\",\\\"message\\\":\\\"\\\\nI1202 20:12:54.336774 6491 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1202 20:12:54.336789 6491 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1202 20:12:54.336829 6491 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1202 20:12:54.340526 6491 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 20:12:54.340554 6491 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 20:12:54.340564 6491 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1202 20:12:54.340596 6491 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 20:12:54.340619 6491 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 20:12:54.340646 6491 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1202 20:12:54.340686 6491 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 20:12:54.340735 6491 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 20:12:54.340791 6491 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 20:12:54.340821 6491 factory.go:656] Stopping watch factory\\\\nI1202 20:12:54.340835 6491 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 20:12:54.340843 6491 ovnkube.go:599] Stopped ovnkube\\\\nI1202 20:12:54.340794 6491 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1202 
2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:13:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f94dfdee88013ea62594ac91f02990be3bf0d544fb937f370baccd6720d1f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b286j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:22Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.880598 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p72p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7cf3531-4bca-4b5d-9fa6-e70775605e3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a44d3bef0785796e52961c2f6031af15cf2f142b34f771283ad3f2d8b8abed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47szf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p72p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:22Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.901960 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:22Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.902358 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.902401 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.902413 4796 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.902434 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.902448 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:22Z","lastTransitionTime":"2025-12-02T20:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.927848 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzw77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93502952b6cfea496c2427f936ff031762e3577ca74b4eee4998e3800f768211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d
0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Runnin
g\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzw77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:22Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.947029 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r7pwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985789d-ae41-4ae1-938a-8f60820a303c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4176f1cf86fa360a7a980992974166a8ea22b7f6f0ba8b28539f3e411c9511eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8qs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://735f79154356ab35adf831f1f36fd0956463bf969e88e69508401c8976d24917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api
-access-h8qs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r7pwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:22Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.972343 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423e1ef0-d2b1-442c-8751-372d2de26a00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3da2c67880283ee737b7c4467b7ed58f83
cff0368218a463614588fa65737db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 20:12:09.715158 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 20:12:09.723356 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1998965949/tls.crt::/tmp/serving-cert-1998965949/tls.key\\\\\\\"\\\\nI1202 20:12:15.056297 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 20:12:15.060468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 20:12:15.060506 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 20:12:15.060526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 20:12:15.060532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 20:12:15.077181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1202 20:12:15.077200 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 20:12:15.077201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077211 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077216 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 20:12:15.077220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 20:12:15.077223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 20:12:15.077226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 20:12:15.079033 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:22Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:22 crc kubenswrapper[4796]: I1202 20:13:22.989961 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9a3760c-5324-4e7c-ba6d-947d4200fd7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca809eaaaad32548ccb944ec8a40abe7dc6349d1e4b1640e53d5cd1aba9e8798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3931263dfbba517a3f109c2fec313fdb38b7232f4384fbf922d2de2f40d95840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://567f06ef173b505801a2b909e2ffd85973c697cbd3d89c10a5b118247c58c193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bce1cb4a264ff1c25ca77f3e4c9aa251e20002d987f4e1aa61fce2c99799f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:22Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.006520 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.006587 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.006600 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.006622 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.006635 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:23Z","lastTransitionTime":"2025-12-02T20:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.007445 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69e27721-7359-4707-aaaa-181eb24401e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6d808775c86045c4d0142fbb6f4e015251902b10a37c70bd2804c260c72b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d502f34530d387d0dca7b89c9b7cd1fea6e06b6ddf51ab1c58daefd8265e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8793493573dfb14c86ac89098d6872b1833e33f246947bedd5e5308a3d656f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e36d3a2ede3aa51b49c1512f62e61632e2ef39f384c8d25fec122ece877c7dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e36d3a2ede3aa51b49c1512f62e61632e2ef39f384c8d25fec122ece877c7dad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.027882 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.048742 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64bd216ef667b40cff69e55295ff7ac4d61f8e893dc1f24f12f42aed5463417b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.067454 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g7nb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c1710d-bf66-4687-8ee7-ea828cde5d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8g97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8g97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g7nb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.110200 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.110310 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.110335 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.110371 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.110395 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:23Z","lastTransitionTime":"2025-12-02T20:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.118849 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8cae09d-e96b-4d09-b52e-0914be7de1b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c89bf933f9cea43b887da33832efb415751c7e4b2c920514543b1e30a2a584c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fc05d7f51b4dd38d72107bd79f5c2e1ecfd31843e251bbeb816545d3e491a3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\
\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f59840a8a600786748b3a153bf8b1a784a55c3b43ffd837c5deaf1d14d15cbed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://417274ef4639b491835a4ae4d0df778270cdb94ee594d2ff8bfdd86b794ca488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd0e59c5c92c3ed21d1c83f3d929d9294fc86b7d7518bc1ed5091fc7c055b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":
\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.142998 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjq8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64bc04e9-c9fc-4a80-98de-59c88457ace6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9693d017132303f9c7998ef2e45ebe24e58350336b436a84e0a3d1260b8d42d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpzx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.170458 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5558dc7c-93f9-4212-bf22-fdec743e47ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a71fbcbe900da771fbfbdbe666a0770e052c398e317c196741da6868bac278c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff692b0e7bcf0f9571138c9632ae4c22dddcccc147c6b0e7994e7ff6e13aa08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wzhpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.217064 4796 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.217135 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.217154 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.217181 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.217203 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:23Z","lastTransitionTime":"2025-12-02T20:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.265013 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.265101 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.265186 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.265327 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:13:23 crc kubenswrapper[4796]: E1202 20:13:23.266287 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:13:23 crc kubenswrapper[4796]: E1202 20:13:23.266383 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:13:23 crc kubenswrapper[4796]: E1202 20:13:23.266425 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:13:23 crc kubenswrapper[4796]: E1202 20:13:23.266578 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.320399 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.320447 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.320458 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.320483 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.320497 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:23Z","lastTransitionTime":"2025-12-02T20:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.423516 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.423617 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.423650 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.423687 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.423713 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:23Z","lastTransitionTime":"2025-12-02T20:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.527965 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.528033 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.528053 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.528082 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.528102 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:23Z","lastTransitionTime":"2025-12-02T20:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.632925 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.632997 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.633031 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.633052 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.633061 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:23Z","lastTransitionTime":"2025-12-02T20:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.730849 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b286j_87a81d4f-9cb5-40b1-93cf-5691b915a68e/ovnkube-controller/3.log" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.732079 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b286j_87a81d4f-9cb5-40b1-93cf-5691b915a68e/ovnkube-controller/2.log" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.735486 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.735534 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.735553 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.735581 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.735600 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:23Z","lastTransitionTime":"2025-12-02T20:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.737142 4796 generic.go:334] "Generic (PLEG): container finished" podID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerID="f0a2cf28b941d4231e586fdbf9805047b83aa21ad325c180afbee60a6d5227c5" exitCode=1 Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.737231 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" event={"ID":"87a81d4f-9cb5-40b1-93cf-5691b915a68e","Type":"ContainerDied","Data":"f0a2cf28b941d4231e586fdbf9805047b83aa21ad325c180afbee60a6d5227c5"} Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.737355 4796 scope.go:117] "RemoveContainer" containerID="cf1e7fbc49ae946f030af5c351df582824c1cf0a10dacb194d661d3353563ff5" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.740320 4796 scope.go:117] "RemoveContainer" containerID="f0a2cf28b941d4231e586fdbf9805047b83aa21ad325c180afbee60a6d5227c5" Dec 02 20:13:23 crc kubenswrapper[4796]: E1202 20:13:23.740687 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-b286j_openshift-ovn-kubernetes(87a81d4f-9cb5-40b1-93cf-5691b915a68e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.763108 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecebc7be87a10c18ae7e1424f0a3e6cf082cb50d0a18be5445d6e27300be5b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e826327af5e2a3f48eb37e299415b07a1ebbf794c2d2877b3741e3d49534b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.790683 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mzw77" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c12557c-4c0b-4e9e-b954-a8bc04b9ecdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93502952b6cfea496c2427f936ff031762e3577ca74b4eee4998e3800f768211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18c0e450a0f857a10bfb347c1e1bea247401e7dcfbf824b121eeb913d5c12a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5ed955802e5adb8efdc0138bbb35ffbe1e24fc6e956a894bb90398b57a46c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb2af52c7d1826c1e30107a7c4f3b4863abb12436988f2351465ee1a9dd65634\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a7a26c995435445ad08838742ab391a1717b1ead90ee342728f81ed2e48ddf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbbf0405c4f3d2ac82f9f29871b84cfa3a2a6c4d3984298e66a0a3a8fc62f3d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15cddde299c3bc486baf31ef9b0e1b31adb6209a7def13a85104396dccaa126f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kxk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mzw77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.811629 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r7pwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985789d-ae41-4ae1-938a-8f60820a303c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4176f1cf86fa360a7a980992974166a8ea22b7f6f0ba8b28539f3e411c9511eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8qs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://735f79154356ab35adf831f1f36fd0956463bf969e88e69508401c8976d24917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8qs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r7pwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:23Z is after 2025-08-24T17:21:41Z" Dec 02 
20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.835078 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.839067 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.839118 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.839139 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.839168 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.839187 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:23Z","lastTransitionTime":"2025-12-02T20:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.858411 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64bd216ef667b40cff69e55295ff7ac4d61f8e893dc1f24f12f42aed5463417b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.879936 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g7nb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c1710d-bf66-4687-8ee7-ea828cde5d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8g97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8g97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g7nb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.904163 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"423e1ef0-d2b1-442c-8751-372d2de26a00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 20:12:09.715158 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 20:12:09.723356 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1998965949/tls.crt::/tmp/serving-cert-1998965949/tls.key\\\\\\\"\\\\nI1202 20:12:15.056297 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 20:12:15.060468 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 20:12:15.060506 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 20:12:15.060526 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 20:12:15.060532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 20:12:15.077181 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1202 20:12:15.077200 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1202 20:12:15.077201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077211 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 20:12:15.077216 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 20:12:15.077220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 20:12:15.077223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 20:12:15.077226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 20:12:15.079033 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.922873 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9a3760c-5324-4e7c-ba6d-947d4200fd7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca809eaaaad32548ccb944ec8a40abe7dc6349d1e4b1640e53d5cd1aba9e8798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3931263dfbba517a3f109c2fec313fdb38b7232f4384fbf922d2de2f40d95840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://567f06ef173b505801a2b909e2ffd85973c697cbd3d89c10a5b118247c58c193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bce1cb4a264ff1c25ca77f3e4c9aa251e20002d987f4e1aa61fce2c99799f42\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.943487 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.943567 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.943593 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.943629 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.943656 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:23Z","lastTransitionTime":"2025-12-02T20:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.949794 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69e27721-7359-4707-aaaa-181eb24401e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd6d808775c86045c4d0142fbb6f4e015251902b10a37c70bd2804c260c72b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d502f34530d387d0dca7b89c9b7cd1fea6e06b6ddf51ab1c58daefd8265e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae8793493573dfb14c86ac89098d6872b1833e33f246947bedd5e5308a3d656f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e36d3a2ede3aa51b49c1512f62e61632e2ef39f384c8d25fec122ece877c7dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e36d3a2ede3aa51b49c1512f62e61632e2ef39f384c8d25fec122ece877c7dad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:23 crc kubenswrapper[4796]: I1202 20:13:23.988458 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8cae09d-e96b-4d09-b52e-0914be7de1b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c89bf933f9cea43b887da33832efb415751c7e4b2c920514543b1e30a2a584c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fc
05d7f51b4dd38d72107bd79f5c2e1ecfd31843e251bbeb816545d3e491a3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f59840a8a600786748b3a153bf8b1a784a55c3b43ffd837c5deaf1d14d15cbed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://417274ef4639b491835a4ae4d0df778270cdb94ee594d2ff8bfdd86b794ca488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd0e59c5c92c3ed21d1c83f3d929d9294fc86b7d7518bc1ed5091fc7c055b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e1cd449dbfdd1b27eff312ed4337c8f3039626514f5ae015a86f1fb107f94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2228c8778db4fb6da9a43f715c6e86cc8f39d1f477c4f4ad24163f719e9834f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f38bfa7f7c2c17e930fffd1d12f720ce733316d708cd5a47e730a5c1f2daf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:23Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.009215 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjq8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64bc04e9-c9fc-4a80-98de-59c88457ace6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9693d017132303f9c7998ef2e45ebe24e58350336b436a84e0a3d1260b8d42d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpzx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:24Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.030545 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5558dc7c-93f9-4212-bf22-fdec743e47ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a71fbcbe900da771fbfbdbe666a0770e052c398e317c196741da6868bac278c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff692b0e7bcf0f9571138c9632ae4c22dddcccc147c6b0e7994e7ff6e13aa08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkp68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wzhpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:24Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.047160 4796 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.047300 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.047329 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.047367 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.047395 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:24Z","lastTransitionTime":"2025-12-02T20:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.050963 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:24Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.073634 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m672l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03fe6ac0-1095-4336-a25c-4dd0d6e45053\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:13:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45c5c8b97a0407e37a1c8e704004a0752d9143a666d1ed8d5df6e954adc172a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31287b814c58a714ee9e12931a432d350919edbaae9abdfda56aa0379f08961a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T20:13:05Z\\\",\\\"message\\\":\\\"2025-12-02T20:12:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_045b79a6-56ee-4f87-b021-6b86f8892f0b\\\\n2025-12-02T20:12:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_045b79a6-56ee-4f87-b021-6b86f8892f0b to /host/opt/cni/bin/\\\\n2025-12-02T20:12:20Z [verbose] multus-daemon started\\\\n2025-12-02T20:12:20Z [verbose] Readiness Indicator file check\\\\n2025-12-02T20:13:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:20Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:13:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nnfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m672l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:24Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.104860 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87a81d4f-9cb5-40b1-93cf-5691b915a68e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://655f6d42d97749b669cfa44687f71146615027902d52926d59558a788be4553a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb637106bbcec18e199338d1deb9718195ec58a98bd3926955d08e7f77401544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://045bb912902864e6abcf1d71fd0e3f8dc941197ea1e572d2e2e2011b536ba7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be133bdee26d23a224b8da570240c1862aa542c005d8017b2d1817c7c286769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b39195ca74c45e0c948605f1741eef4d655ae6ee34c0fdf6fb1159336e99c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04e244b0768662bff1ef8a3d0ad1db2df8f9470ec043bbe007542e37d69a3586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a2cf28b941d4231e586fdbf9805047b83aa21ad325c180afbee60a6d5227c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf1e7fbc49ae946f030af5c351df582824c1cf0a10dacb194d661d3353563ff5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T20:12:54Z\\\",\\\"message\\\":\\\"\\\\nI1202 20:12:54.336774 6491 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1202 20:12:54.336789 6491 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1202 20:12:54.336829 6491 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1202 20:12:54.340526 6491 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 20:12:54.340554 6491 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 20:12:54.340564 6491 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1202 20:12:54.340596 6491 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 20:12:54.340619 6491 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 20:12:54.340646 6491 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1202 20:12:54.340686 6491 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 20:12:54.340735 6491 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 20:12:54.340791 6491 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 20:12:54.340821 6491 factory.go:656] Stopping watch factory\\\\nI1202 20:12:54.340835 6491 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 20:12:54.340843 6491 ovnkube.go:599] Stopped ovnkube\\\\nI1202 20:12:54.340794 6491 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1202 2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0a2cf28b941d4231e586fdbf9805047b83aa21ad325c180afbee60a6d5227c5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T20:13:23Z\\\",\\\"message\\\":\\\"9-gdk6g for pod on switch crc\\\\nI1202 20:13:22.924923 6859 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1202 20:13:22.924760 6859 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-b286j after 0 failed attempt(s)\\\\nI1202 20:13:22.924968 6859 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1202 20:13:22.924999 6859 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-b286j\\\\nF1202 20:13:22.925013 6859 ovnkube.go:137] failed to run 
ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T20:13:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f94dfdee88013ea62594ac91f02990be3bf0d544fb937f370baccd6720d1f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\
\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjjqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b286j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:24Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.124029 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8p72p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7cf3531-4bca-4b5d-9fa6-e70775605e3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5a44d3bef0785796e52961c2f6031af15cf2f142b34f771283ad3f2d8b8abed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47szf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:12:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8p72p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:24Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.142476 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a525e-e8d2-4718-b378-db2a988fd4c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T20:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f232da7c9328c076303d93cdcc696870504e56361a76a1e8c3222978856d6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2996617c5bfa9169fbd57b20c802321da67f8190e58dc4884fce27057448ce84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2996617c5bfa9169fbd57b20c802321da67f8190e58dc4884fce27057448ce84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T20:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T20:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T20:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:24Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.150896 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.150974 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.151001 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.151039 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.151070 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:24Z","lastTransitionTime":"2025-12-02T20:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.163774 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b634e2ec6fa9c531cec805314972afc064cf0790587894836a857a7c48e66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T20:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:24Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.181610 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T20:12:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T20:13:24Z is after 2025-08-24T17:21:41Z" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.254312 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.254414 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.254437 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.254465 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.254488 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:24Z","lastTransitionTime":"2025-12-02T20:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.358131 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.358225 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.358251 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.358327 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.358353 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:24Z","lastTransitionTime":"2025-12-02T20:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.462481 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.462584 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.462610 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.462648 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.462672 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:24Z","lastTransitionTime":"2025-12-02T20:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.566429 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.566508 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.566530 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.566559 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.566581 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:24Z","lastTransitionTime":"2025-12-02T20:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.668906 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.668972 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.668996 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.669026 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.669046 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:24Z","lastTransitionTime":"2025-12-02T20:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.745623 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b286j_87a81d4f-9cb5-40b1-93cf-5691b915a68e/ovnkube-controller/3.log" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.752854 4796 scope.go:117] "RemoveContainer" containerID="f0a2cf28b941d4231e586fdbf9805047b83aa21ad325c180afbee60a6d5227c5" Dec 02 20:13:24 crc kubenswrapper[4796]: E1202 20:13:24.753404 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-b286j_openshift-ovn-kubernetes(87a81d4f-9cb5-40b1-93cf-5691b915a68e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.772931 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.772991 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.773012 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.773039 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.773061 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:24Z","lastTransitionTime":"2025-12-02T20:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.826012 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=69.825976904 podStartE2EDuration="1m9.825976904s" podCreationTimestamp="2025-12-02 20:12:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:13:24.797563827 +0000 UTC m=+87.800939391" watchObservedRunningTime="2025-12-02 20:13:24.825976904 +0000 UTC m=+87.829352478" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.849512 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=66.849469739 podStartE2EDuration="1m6.849469739s" podCreationTimestamp="2025-12-02 20:12:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:13:24.827008629 +0000 UTC m=+87.830384203" watchObservedRunningTime="2025-12-02 20:13:24.849469739 +0000 UTC m=+87.852845303" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.871749 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=36.871725285 podStartE2EDuration="36.871725285s" podCreationTimestamp="2025-12-02 20:12:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:13:24.849457149 +0000 UTC m=+87.852832723" watchObservedRunningTime="2025-12-02 20:13:24.871725285 +0000 UTC m=+87.875100849" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.876706 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.876811 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.876836 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.876873 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.876902 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:24Z","lastTransitionTime":"2025-12-02T20:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.963866 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=65.963828982 podStartE2EDuration="1m5.963828982s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:13:24.962992651 +0000 UTC m=+87.966368215" watchObservedRunningTime="2025-12-02 20:13:24.963828982 +0000 UTC m=+87.967204556" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.980636 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.980690 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.980710 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.980740 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.980761 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:24Z","lastTransitionTime":"2025-12-02T20:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:24 crc kubenswrapper[4796]: I1202 20:13:24.984455 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-mpjq8" podStartSLOduration=65.984421086 podStartE2EDuration="1m5.984421086s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:13:24.983641987 +0000 UTC m=+87.987017551" watchObservedRunningTime="2025-12-02 20:13:24.984421086 +0000 UTC m=+87.987796660" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.027134 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podStartSLOduration=66.027096882 podStartE2EDuration="1m6.027096882s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:13:25.008015765 +0000 UTC m=+88.011391339" watchObservedRunningTime="2025-12-02 20:13:25.027096882 +0000 UTC m=+88.030472456" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.056991 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=23.056956753 podStartE2EDuration="23.056956753s" podCreationTimestamp="2025-12-02 20:13:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:13:25.028491656 +0000 UTC m=+88.031867230" watchObservedRunningTime="2025-12-02 20:13:25.056956753 +0000 UTC m=+88.060332317" Dec 02 20:13:25 crc 
kubenswrapper[4796]: I1202 20:13:25.085178 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.085286 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.085317 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.085357 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.085382 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:25Z","lastTransitionTime":"2025-12-02T20:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.131120 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-m672l" podStartSLOduration=66.131088289 podStartE2EDuration="1m6.131088289s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:13:25.130766372 +0000 UTC m=+88.134142006" watchObservedRunningTime="2025-12-02 20:13:25.131088289 +0000 UTC m=+88.134463853" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.188754 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.188810 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.188822 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.188843 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.188856 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:25Z","lastTransitionTime":"2025-12-02T20:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.196414 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-8p72p" podStartSLOduration=66.196374529 podStartE2EDuration="1m6.196374529s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:13:25.195372575 +0000 UTC m=+88.198748149" watchObservedRunningTime="2025-12-02 20:13:25.196374529 +0000 UTC m=+88.199750093" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.258324 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-mzw77" podStartSLOduration=66.258288826 podStartE2EDuration="1m6.258288826s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:13:25.237825415 +0000 UTC m=+88.241200959" watchObservedRunningTime="2025-12-02 20:13:25.258288826 +0000 UTC m=+88.261664380" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.264040 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:13:25 crc kubenswrapper[4796]: E1202 20:13:25.264202 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.264491 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:13:25 crc kubenswrapper[4796]: E1202 20:13:25.264565 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.264604 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:13:25 crc kubenswrapper[4796]: E1202 20:13:25.264756 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.264806 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:13:25 crc kubenswrapper[4796]: E1202 20:13:25.265028 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.292223 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.292372 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.292390 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.292417 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.292438 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:25Z","lastTransitionTime":"2025-12-02T20:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.395841 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.395900 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.395916 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.395965 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.395983 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:25Z","lastTransitionTime":"2025-12-02T20:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.499363 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.499423 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.499438 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.499461 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.499478 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:25Z","lastTransitionTime":"2025-12-02T20:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.602962 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.603032 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.603057 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.603089 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.603112 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:25Z","lastTransitionTime":"2025-12-02T20:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.705993 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.706053 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.706068 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.706086 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.706099 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:25Z","lastTransitionTime":"2025-12-02T20:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.810062 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.810138 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.810224 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.810294 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.810318 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:25Z","lastTransitionTime":"2025-12-02T20:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.913433 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.913523 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.913548 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.913584 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:25 crc kubenswrapper[4796]: I1202 20:13:25.913646 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:25Z","lastTransitionTime":"2025-12-02T20:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.017307 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.017376 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.017399 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.017435 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.017460 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:26Z","lastTransitionTime":"2025-12-02T20:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.121161 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.121235 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.121283 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.121317 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.121348 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:26Z","lastTransitionTime":"2025-12-02T20:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.224799 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.224891 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.224915 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.224949 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.224969 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:26Z","lastTransitionTime":"2025-12-02T20:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.328000 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.328082 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.328112 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.328148 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.328174 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:26Z","lastTransitionTime":"2025-12-02T20:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.431779 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.431849 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.431869 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.431902 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.431921 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:26Z","lastTransitionTime":"2025-12-02T20:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.536056 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.536160 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.536188 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.536224 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.536248 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:26Z","lastTransitionTime":"2025-12-02T20:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.640234 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.640320 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.640334 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.640355 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.640369 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:26Z","lastTransitionTime":"2025-12-02T20:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.744441 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.744496 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.744510 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.744530 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.744542 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:26Z","lastTransitionTime":"2025-12-02T20:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.852153 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.852288 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.852356 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.852405 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.852452 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:26Z","lastTransitionTime":"2025-12-02T20:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.956203 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.956325 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.956347 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.956428 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:26 crc kubenswrapper[4796]: I1202 20:13:26.956459 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:26Z","lastTransitionTime":"2025-12-02T20:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.060609 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.060670 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.060691 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.060722 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.060743 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:27Z","lastTransitionTime":"2025-12-02T20:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.164033 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.164085 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.164099 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.164119 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.164136 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:27Z","lastTransitionTime":"2025-12-02T20:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.264850 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.264987 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.267577 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.267668 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.267691 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.267723 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.267744 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:27Z","lastTransitionTime":"2025-12-02T20:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:27 crc kubenswrapper[4796]: E1202 20:13:27.267917 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.267997 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.268032 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:13:27 crc kubenswrapper[4796]: E1202 20:13:27.268104 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:13:27 crc kubenswrapper[4796]: E1202 20:13:27.268240 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:13:27 crc kubenswrapper[4796]: E1202 20:13:27.268398 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.371484 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.371552 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.371570 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.371601 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.371626 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:27Z","lastTransitionTime":"2025-12-02T20:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.475392 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.475476 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.475502 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.475536 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.475567 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:27Z","lastTransitionTime":"2025-12-02T20:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.578135 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.578201 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.578222 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.578311 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.578334 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:27Z","lastTransitionTime":"2025-12-02T20:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.680950 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.681015 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.681038 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.681068 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.681091 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:27Z","lastTransitionTime":"2025-12-02T20:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.784194 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.784296 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.784322 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.784352 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.784386 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:27Z","lastTransitionTime":"2025-12-02T20:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.888575 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.888643 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.888660 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.888869 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.888890 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:27Z","lastTransitionTime":"2025-12-02T20:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.992456 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.992525 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.992543 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.992569 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:27 crc kubenswrapper[4796]: I1202 20:13:27.992591 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:27Z","lastTransitionTime":"2025-12-02T20:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.096396 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.096491 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.096510 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.096541 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.096559 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:28Z","lastTransitionTime":"2025-12-02T20:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.200026 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.200306 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.200446 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.200612 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.200769 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:28Z","lastTransitionTime":"2025-12-02T20:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.304068 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.304339 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.304486 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.304611 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.304768 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:28Z","lastTransitionTime":"2025-12-02T20:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.409044 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.409118 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.409139 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.409169 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.409191 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:28Z","lastTransitionTime":"2025-12-02T20:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.513397 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.513477 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.513496 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.513524 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.513542 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:28Z","lastTransitionTime":"2025-12-02T20:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.616213 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.616285 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.616296 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.616316 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.616681 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:28Z","lastTransitionTime":"2025-12-02T20:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.720115 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.720213 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.720242 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.720326 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.720356 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:28Z","lastTransitionTime":"2025-12-02T20:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.823834 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.823915 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.823933 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.823961 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.823979 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:28Z","lastTransitionTime":"2025-12-02T20:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.927135 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.927213 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.927237 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.927309 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:28 crc kubenswrapper[4796]: I1202 20:13:28.927333 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:28Z","lastTransitionTime":"2025-12-02T20:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.031297 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.031375 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.031400 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.031437 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.031457 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:29Z","lastTransitionTime":"2025-12-02T20:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.134843 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.134901 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.134920 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.134946 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.134965 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:29Z","lastTransitionTime":"2025-12-02T20:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.239359 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.239418 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.239436 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.239457 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.239468 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:29Z","lastTransitionTime":"2025-12-02T20:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.264455 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.264525 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:13:29 crc kubenswrapper[4796]: E1202 20:13:29.264601 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:13:29 crc kubenswrapper[4796]: E1202 20:13:29.264729 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.264844 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:13:29 crc kubenswrapper[4796]: E1202 20:13:29.264965 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.265131 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:13:29 crc kubenswrapper[4796]: E1202 20:13:29.265395 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.343447 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.343526 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.343544 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.343576 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.343597 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:29Z","lastTransitionTime":"2025-12-02T20:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.446452 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.446487 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.446497 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.446513 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.446525 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:29Z","lastTransitionTime":"2025-12-02T20:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.549430 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.549508 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.549527 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.549555 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.549578 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:29Z","lastTransitionTime":"2025-12-02T20:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.652497 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.652565 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.652585 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.652613 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.652631 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:29Z","lastTransitionTime":"2025-12-02T20:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.755700 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.755774 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.755796 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.755825 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.755848 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:29Z","lastTransitionTime":"2025-12-02T20:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.858417 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.858458 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.858469 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.858484 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.858494 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:29Z","lastTransitionTime":"2025-12-02T20:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.961542 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.961593 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.961609 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.961635 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:29 crc kubenswrapper[4796]: I1202 20:13:29.961652 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:29Z","lastTransitionTime":"2025-12-02T20:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.063859 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.063907 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.063918 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.063935 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.063946 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:30Z","lastTransitionTime":"2025-12-02T20:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.166658 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.166730 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.166752 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.166783 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.166809 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:30Z","lastTransitionTime":"2025-12-02T20:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.269596 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.269690 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.269765 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.269807 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.269842 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:30Z","lastTransitionTime":"2025-12-02T20:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.372834 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.372895 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.372913 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.372936 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.372953 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:30Z","lastTransitionTime":"2025-12-02T20:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.420826 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.420886 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.420905 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.420927 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.420946 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T20:13:30Z","lastTransitionTime":"2025-12-02T20:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.480383 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r7pwk" podStartSLOduration=71.480362271 podStartE2EDuration="1m11.480362271s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:13:25.257694732 +0000 UTC m=+88.261070306" watchObservedRunningTime="2025-12-02 20:13:30.480362271 +0000 UTC m=+93.483737815" Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.481472 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9pg7"] Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.481835 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9pg7" Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.483936 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.484070 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.485034 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.485514 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.644140 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f4d54849-6155-4463-ac13-f8ed0a05b6df-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-x9pg7\" (UID: \"f4d54849-6155-4463-ac13-f8ed0a05b6df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9pg7" Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.644225 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4d54849-6155-4463-ac13-f8ed0a05b6df-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-x9pg7\" (UID: \"f4d54849-6155-4463-ac13-f8ed0a05b6df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9pg7" Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.644372 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4d54849-6155-4463-ac13-f8ed0a05b6df-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-x9pg7\" (UID: \"f4d54849-6155-4463-ac13-f8ed0a05b6df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9pg7" Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.644418 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f4d54849-6155-4463-ac13-f8ed0a05b6df-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-x9pg7\" (UID: \"f4d54849-6155-4463-ac13-f8ed0a05b6df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9pg7" Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.644448 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f4d54849-6155-4463-ac13-f8ed0a05b6df-service-ca\") pod \"cluster-version-operator-5c965bbfc6-x9pg7\" (UID: \"f4d54849-6155-4463-ac13-f8ed0a05b6df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9pg7" Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.745493 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4d54849-6155-4463-ac13-f8ed0a05b6df-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-x9pg7\" (UID: \"f4d54849-6155-4463-ac13-f8ed0a05b6df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9pg7" Dec 02 20:13:30 crc 
kubenswrapper[4796]: I1202 20:13:30.745589 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f4d54849-6155-4463-ac13-f8ed0a05b6df-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-x9pg7\" (UID: \"f4d54849-6155-4463-ac13-f8ed0a05b6df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9pg7" Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.745623 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f4d54849-6155-4463-ac13-f8ed0a05b6df-service-ca\") pod \"cluster-version-operator-5c965bbfc6-x9pg7\" (UID: \"f4d54849-6155-4463-ac13-f8ed0a05b6df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9pg7" Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.745675 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f4d54849-6155-4463-ac13-f8ed0a05b6df-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-x9pg7\" (UID: \"f4d54849-6155-4463-ac13-f8ed0a05b6df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9pg7" Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.745730 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4d54849-6155-4463-ac13-f8ed0a05b6df-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-x9pg7\" (UID: \"f4d54849-6155-4463-ac13-f8ed0a05b6df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9pg7" Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.745739 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f4d54849-6155-4463-ac13-f8ed0a05b6df-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-x9pg7\" (UID: \"f4d54849-6155-4463-ac13-f8ed0a05b6df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9pg7" Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.745790 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f4d54849-6155-4463-ac13-f8ed0a05b6df-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-x9pg7\" (UID: \"f4d54849-6155-4463-ac13-f8ed0a05b6df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9pg7" Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.747488 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f4d54849-6155-4463-ac13-f8ed0a05b6df-service-ca\") pod \"cluster-version-operator-5c965bbfc6-x9pg7\" (UID: \"f4d54849-6155-4463-ac13-f8ed0a05b6df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9pg7" Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.754235 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4d54849-6155-4463-ac13-f8ed0a05b6df-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-x9pg7\" (UID: \"f4d54849-6155-4463-ac13-f8ed0a05b6df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9pg7" Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.777997 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/f4d54849-6155-4463-ac13-f8ed0a05b6df-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-x9pg7\" (UID: \"f4d54849-6155-4463-ac13-f8ed0a05b6df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9pg7" Dec 02 20:13:30 crc kubenswrapper[4796]: I1202 20:13:30.808145 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9pg7" Dec 02 20:13:31 crc kubenswrapper[4796]: I1202 20:13:31.264015 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:13:31 crc kubenswrapper[4796]: I1202 20:13:31.264040 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:13:31 crc kubenswrapper[4796]: I1202 20:13:31.264063 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:13:31 crc kubenswrapper[4796]: I1202 20:13:31.264180 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:13:31 crc kubenswrapper[4796]: E1202 20:13:31.264329 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:13:31 crc kubenswrapper[4796]: E1202 20:13:31.264551 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:13:31 crc kubenswrapper[4796]: E1202 20:13:31.264703 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:13:31 crc kubenswrapper[4796]: E1202 20:13:31.264856 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:13:31 crc kubenswrapper[4796]: I1202 20:13:31.780139 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9pg7" event={"ID":"f4d54849-6155-4463-ac13-f8ed0a05b6df","Type":"ContainerStarted","Data":"954c93113e73040472115c50d166943b7159f40e02032d9c041add10ad085096"} Dec 02 20:13:31 crc kubenswrapper[4796]: I1202 20:13:31.780188 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9pg7" event={"ID":"f4d54849-6155-4463-ac13-f8ed0a05b6df","Type":"ContainerStarted","Data":"3be361903a7fa05b481bb6fbb1ce45226d6e280f904b3456758ba5ae84ea5067"} Dec 02 20:13:33 crc kubenswrapper[4796]: I1202 20:13:33.263957 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:13:33 crc kubenswrapper[4796]: I1202 20:13:33.264025 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:13:33 crc kubenswrapper[4796]: I1202 20:13:33.264083 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:13:33 crc kubenswrapper[4796]: E1202 20:13:33.264357 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:13:33 crc kubenswrapper[4796]: I1202 20:13:33.264438 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:13:33 crc kubenswrapper[4796]: E1202 20:13:33.264542 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:13:33 crc kubenswrapper[4796]: E1202 20:13:33.264962 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:13:33 crc kubenswrapper[4796]: E1202 20:13:33.265180 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:13:35 crc kubenswrapper[4796]: I1202 20:13:35.264289 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:13:35 crc kubenswrapper[4796]: I1202 20:13:35.264390 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:13:35 crc kubenswrapper[4796]: I1202 20:13:35.264473 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:13:35 crc kubenswrapper[4796]: E1202 20:13:35.264461 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:13:35 crc kubenswrapper[4796]: E1202 20:13:35.264632 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:13:35 crc kubenswrapper[4796]: I1202 20:13:35.264698 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:13:35 crc kubenswrapper[4796]: E1202 20:13:35.265188 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:13:35 crc kubenswrapper[4796]: E1202 20:13:35.265605 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:13:37 crc kubenswrapper[4796]: I1202 20:13:37.264469 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:13:37 crc kubenswrapper[4796]: I1202 20:13:37.264489 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:13:37 crc kubenswrapper[4796]: I1202 20:13:37.264549 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:13:37 crc kubenswrapper[4796]: I1202 20:13:37.264608 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:13:37 crc kubenswrapper[4796]: E1202 20:13:37.266757 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:13:37 crc kubenswrapper[4796]: E1202 20:13:37.266822 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:13:37 crc kubenswrapper[4796]: E1202 20:13:37.266913 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:13:37 crc kubenswrapper[4796]: E1202 20:13:37.267064 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:13:37 crc kubenswrapper[4796]: I1202 20:13:37.521872 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60c1710d-bf66-4687-8ee7-ea828cde5d53-metrics-certs\") pod \"network-metrics-daemon-g7nb5\" (UID: \"60c1710d-bf66-4687-8ee7-ea828cde5d53\") " pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:13:37 crc kubenswrapper[4796]: E1202 20:13:37.522308 4796 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 20:13:37 crc kubenswrapper[4796]: E1202 20:13:37.522496 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60c1710d-bf66-4687-8ee7-ea828cde5d53-metrics-certs podName:60c1710d-bf66-4687-8ee7-ea828cde5d53 nodeName:}" failed. No retries permitted until 2025-12-02 20:14:41.522454979 +0000 UTC m=+164.525830553 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60c1710d-bf66-4687-8ee7-ea828cde5d53-metrics-certs") pod "network-metrics-daemon-g7nb5" (UID: "60c1710d-bf66-4687-8ee7-ea828cde5d53") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 20:13:39 crc kubenswrapper[4796]: I1202 20:13:39.264433 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:13:39 crc kubenswrapper[4796]: I1202 20:13:39.264542 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:13:39 crc kubenswrapper[4796]: I1202 20:13:39.264642 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:13:39 crc kubenswrapper[4796]: E1202 20:13:39.264811 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:13:39 crc kubenswrapper[4796]: I1202 20:13:39.264916 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:13:39 crc kubenswrapper[4796]: E1202 20:13:39.265109 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:13:39 crc kubenswrapper[4796]: E1202 20:13:39.265638 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:13:39 crc kubenswrapper[4796]: E1202 20:13:39.265775 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:13:39 crc kubenswrapper[4796]: I1202 20:13:39.266066 4796 scope.go:117] "RemoveContainer" containerID="f0a2cf28b941d4231e586fdbf9805047b83aa21ad325c180afbee60a6d5227c5" Dec 02 20:13:39 crc kubenswrapper[4796]: E1202 20:13:39.266348 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-b286j_openshift-ovn-kubernetes(87a81d4f-9cb5-40b1-93cf-5691b915a68e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" Dec 02 20:13:41 crc kubenswrapper[4796]: I1202 20:13:41.264290 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:13:41 crc kubenswrapper[4796]: I1202 20:13:41.264370 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:13:41 crc kubenswrapper[4796]: I1202 20:13:41.264370 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:13:41 crc kubenswrapper[4796]: I1202 20:13:41.264499 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:13:41 crc kubenswrapper[4796]: E1202 20:13:41.264511 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:13:41 crc kubenswrapper[4796]: E1202 20:13:41.264614 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:13:41 crc kubenswrapper[4796]: E1202 20:13:41.264889 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:13:41 crc kubenswrapper[4796]: E1202 20:13:41.264943 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:13:43 crc kubenswrapper[4796]: I1202 20:13:43.264815 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:13:43 crc kubenswrapper[4796]: I1202 20:13:43.264936 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:13:43 crc kubenswrapper[4796]: I1202 20:13:43.264828 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:13:43 crc kubenswrapper[4796]: E1202 20:13:43.265089 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:13:43 crc kubenswrapper[4796]: I1202 20:13:43.265220 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:13:43 crc kubenswrapper[4796]: E1202 20:13:43.265573 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:13:43 crc kubenswrapper[4796]: E1202 20:13:43.265731 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:13:43 crc kubenswrapper[4796]: E1202 20:13:43.265408 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:13:45 crc kubenswrapper[4796]: I1202 20:13:45.264488 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:13:45 crc kubenswrapper[4796]: I1202 20:13:45.264555 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:13:45 crc kubenswrapper[4796]: I1202 20:13:45.264555 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:13:45 crc kubenswrapper[4796]: I1202 20:13:45.264673 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:13:45 crc kubenswrapper[4796]: E1202 20:13:45.265392 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:13:45 crc kubenswrapper[4796]: E1202 20:13:45.265547 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:13:45 crc kubenswrapper[4796]: E1202 20:13:45.265697 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:13:45 crc kubenswrapper[4796]: E1202 20:13:45.265891 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:13:47 crc kubenswrapper[4796]: I1202 20:13:47.264618 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:13:47 crc kubenswrapper[4796]: I1202 20:13:47.264721 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:13:47 crc kubenswrapper[4796]: I1202 20:13:47.264755 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:13:47 crc kubenswrapper[4796]: E1202 20:13:47.266984 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:13:47 crc kubenswrapper[4796]: I1202 20:13:47.267027 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:13:47 crc kubenswrapper[4796]: E1202 20:13:47.267128 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:13:47 crc kubenswrapper[4796]: E1202 20:13:47.267190 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:13:47 crc kubenswrapper[4796]: E1202 20:13:47.267223 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:13:49 crc kubenswrapper[4796]: I1202 20:13:49.264614 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:13:49 crc kubenswrapper[4796]: I1202 20:13:49.264789 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:13:49 crc kubenswrapper[4796]: E1202 20:13:49.264857 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:13:49 crc kubenswrapper[4796]: I1202 20:13:49.264901 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:13:49 crc kubenswrapper[4796]: I1202 20:13:49.264973 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:13:49 crc kubenswrapper[4796]: E1202 20:13:49.265108 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:13:49 crc kubenswrapper[4796]: E1202 20:13:49.265461 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:13:49 crc kubenswrapper[4796]: E1202 20:13:49.265621 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:13:51 crc kubenswrapper[4796]: I1202 20:13:51.265090 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:13:51 crc kubenswrapper[4796]: I1202 20:13:51.265148 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:13:51 crc kubenswrapper[4796]: I1202 20:13:51.265090 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:13:51 crc kubenswrapper[4796]: E1202 20:13:51.265375 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:13:51 crc kubenswrapper[4796]: I1202 20:13:51.265486 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:13:51 crc kubenswrapper[4796]: E1202 20:13:51.265521 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:13:51 crc kubenswrapper[4796]: E1202 20:13:51.265818 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:13:51 crc kubenswrapper[4796]: E1202 20:13:51.265876 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:13:52 crc kubenswrapper[4796]: I1202 20:13:52.867950 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-m672l_03fe6ac0-1095-4336-a25c-4dd0d6e45053/kube-multus/1.log" Dec 02 20:13:52 crc kubenswrapper[4796]: I1202 20:13:52.868792 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-m672l_03fe6ac0-1095-4336-a25c-4dd0d6e45053/kube-multus/0.log" Dec 02 20:13:52 crc kubenswrapper[4796]: I1202 20:13:52.868852 4796 generic.go:334] "Generic (PLEG): container finished" podID="03fe6ac0-1095-4336-a25c-4dd0d6e45053" containerID="e45c5c8b97a0407e37a1c8e704004a0752d9143a666d1ed8d5df6e954adc172a" exitCode=1 Dec 02 20:13:52 crc kubenswrapper[4796]: I1202 20:13:52.868910 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-m672l" event={"ID":"03fe6ac0-1095-4336-a25c-4dd0d6e45053","Type":"ContainerDied","Data":"e45c5c8b97a0407e37a1c8e704004a0752d9143a666d1ed8d5df6e954adc172a"} Dec 02 20:13:52 crc kubenswrapper[4796]: I1202 20:13:52.868976 4796 scope.go:117] "RemoveContainer" containerID="31287b814c58a714ee9e12931a432d350919edbaae9abdfda56aa0379f08961a" Dec 02 20:13:52 crc kubenswrapper[4796]: I1202 20:13:52.869677 4796 scope.go:117] "RemoveContainer" containerID="e45c5c8b97a0407e37a1c8e704004a0752d9143a666d1ed8d5df6e954adc172a" Dec 02 20:13:52 crc kubenswrapper[4796]: E1202 20:13:52.869946 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-m672l_openshift-multus(03fe6ac0-1095-4336-a25c-4dd0d6e45053)\"" pod="openshift-multus/multus-m672l" podUID="03fe6ac0-1095-4336-a25c-4dd0d6e45053" Dec 02 20:13:52 crc kubenswrapper[4796]: I1202 20:13:52.907984 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9pg7" podStartSLOduration=93.907948077 podStartE2EDuration="1m33.907948077s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:13:31.797727139 +0000 UTC m=+94.801102673" watchObservedRunningTime="2025-12-02 20:13:52.907948077 +0000 UTC m=+115.911323651" Dec 02 20:13:53 crc kubenswrapper[4796]: I1202 20:13:53.264941 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:13:53 crc kubenswrapper[4796]: I1202 20:13:53.265005 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:13:53 crc kubenswrapper[4796]: E1202 20:13:53.265116 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:13:53 crc kubenswrapper[4796]: I1202 20:13:53.265314 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:13:53 crc kubenswrapper[4796]: E1202 20:13:53.265416 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:13:53 crc kubenswrapper[4796]: E1202 20:13:53.265619 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:13:53 crc kubenswrapper[4796]: I1202 20:13:53.265652 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:13:53 crc kubenswrapper[4796]: E1202 20:13:53.265799 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:13:53 crc kubenswrapper[4796]: I1202 20:13:53.267346 4796 scope.go:117] "RemoveContainer" containerID="f0a2cf28b941d4231e586fdbf9805047b83aa21ad325c180afbee60a6d5227c5" Dec 02 20:13:53 crc kubenswrapper[4796]: E1202 20:13:53.267711 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-b286j_openshift-ovn-kubernetes(87a81d4f-9cb5-40b1-93cf-5691b915a68e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" Dec 02 20:13:53 crc kubenswrapper[4796]: I1202 20:13:53.876112 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-m672l_03fe6ac0-1095-4336-a25c-4dd0d6e45053/kube-multus/1.log" Dec 02 20:13:55 crc kubenswrapper[4796]: I1202 20:13:55.264798 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:13:55 crc kubenswrapper[4796]: I1202 20:13:55.264976 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:13:55 crc kubenswrapper[4796]: E1202 20:13:55.265073 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:13:55 crc kubenswrapper[4796]: I1202 20:13:55.264824 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:13:55 crc kubenswrapper[4796]: I1202 20:13:55.264976 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:13:55 crc kubenswrapper[4796]: E1202 20:13:55.265344 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:13:55 crc kubenswrapper[4796]: E1202 20:13:55.265591 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:13:55 crc kubenswrapper[4796]: E1202 20:13:55.265800 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:13:57 crc kubenswrapper[4796]: I1202 20:13:57.264717 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:13:57 crc kubenswrapper[4796]: I1202 20:13:57.264778 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:13:57 crc kubenswrapper[4796]: I1202 20:13:57.264735 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:13:57 crc kubenswrapper[4796]: I1202 20:13:57.264828 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:13:57 crc kubenswrapper[4796]: E1202 20:13:57.267856 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:13:57 crc kubenswrapper[4796]: E1202 20:13:57.267927 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:13:57 crc kubenswrapper[4796]: E1202 20:13:57.268044 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:13:57 crc kubenswrapper[4796]: E1202 20:13:57.268177 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:13:57 crc kubenswrapper[4796]: E1202 20:13:57.299594 4796 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 02 20:13:57 crc kubenswrapper[4796]: E1202 20:13:57.344483 4796 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 20:13:59 crc kubenswrapper[4796]: I1202 20:13:59.265079 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:13:59 crc kubenswrapper[4796]: E1202 20:13:59.266523 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:13:59 crc kubenswrapper[4796]: I1202 20:13:59.265195 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:13:59 crc kubenswrapper[4796]: I1202 20:13:59.265416 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:13:59 crc kubenswrapper[4796]: E1202 20:13:59.266973 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:13:59 crc kubenswrapper[4796]: I1202 20:13:59.265157 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:13:59 crc kubenswrapper[4796]: E1202 20:13:59.267732 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:13:59 crc kubenswrapper[4796]: E1202 20:13:59.267292 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:14:01 crc kubenswrapper[4796]: I1202 20:14:01.264619 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:14:01 crc kubenswrapper[4796]: E1202 20:14:01.264738 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:14:01 crc kubenswrapper[4796]: I1202 20:14:01.264962 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:14:01 crc kubenswrapper[4796]: I1202 20:14:01.265006 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:14:01 crc kubenswrapper[4796]: I1202 20:14:01.264642 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:14:01 crc kubenswrapper[4796]: E1202 20:14:01.265301 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:14:01 crc kubenswrapper[4796]: E1202 20:14:01.265432 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:14:01 crc kubenswrapper[4796]: E1202 20:14:01.265571 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:14:02 crc kubenswrapper[4796]: E1202 20:14:02.347421 4796 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 20:14:03 crc kubenswrapper[4796]: I1202 20:14:03.264447 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:14:03 crc kubenswrapper[4796]: I1202 20:14:03.264998 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:14:03 crc kubenswrapper[4796]: E1202 20:14:03.265179 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:14:03 crc kubenswrapper[4796]: I1202 20:14:03.265289 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:14:03 crc kubenswrapper[4796]: E1202 20:14:03.265380 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:14:03 crc kubenswrapper[4796]: E1202 20:14:03.265449 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:14:03 crc kubenswrapper[4796]: I1202 20:14:03.265621 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:14:03 crc kubenswrapper[4796]: E1202 20:14:03.265905 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:14:05 crc kubenswrapper[4796]: I1202 20:14:05.264290 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:14:05 crc kubenswrapper[4796]: E1202 20:14:05.264694 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:14:05 crc kubenswrapper[4796]: I1202 20:14:05.264367 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:14:05 crc kubenswrapper[4796]: E1202 20:14:05.264786 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:14:05 crc kubenswrapper[4796]: I1202 20:14:05.264397 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:14:05 crc kubenswrapper[4796]: E1202 20:14:05.264843 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:14:05 crc kubenswrapper[4796]: I1202 20:14:05.264344 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:14:05 crc kubenswrapper[4796]: E1202 20:14:05.264889 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:14:06 crc kubenswrapper[4796]: I1202 20:14:06.266200 4796 scope.go:117] "RemoveContainer" containerID="f0a2cf28b941d4231e586fdbf9805047b83aa21ad325c180afbee60a6d5227c5" Dec 02 20:14:06 crc kubenswrapper[4796]: I1202 20:14:06.939663 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b286j_87a81d4f-9cb5-40b1-93cf-5691b915a68e/ovnkube-controller/3.log" Dec 02 20:14:06 crc kubenswrapper[4796]: I1202 20:14:06.944392 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" event={"ID":"87a81d4f-9cb5-40b1-93cf-5691b915a68e","Type":"ContainerStarted","Data":"9fc3c8e0b04dacc030172fed3720aecf5315136a65e44ffc91188881db446fcb"} Dec 02 20:14:06 crc kubenswrapper[4796]: I1202 20:14:06.944872 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:14:07 crc kubenswrapper[4796]: I1202 20:14:07.265027 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:14:07 crc kubenswrapper[4796]: E1202 20:14:07.266024 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:14:07 crc kubenswrapper[4796]: I1202 20:14:07.266300 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:14:07 crc kubenswrapper[4796]: I1202 20:14:07.266326 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:14:07 crc kubenswrapper[4796]: I1202 20:14:07.266580 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:14:07 crc kubenswrapper[4796]: I1202 20:14:07.266605 4796 scope.go:117] "RemoveContainer" containerID="e45c5c8b97a0407e37a1c8e704004a0752d9143a666d1ed8d5df6e954adc172a" Dec 02 20:14:07 crc kubenswrapper[4796]: E1202 20:14:07.266759 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:14:07 crc kubenswrapper[4796]: E1202 20:14:07.266934 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:14:07 crc kubenswrapper[4796]: E1202 20:14:07.267061 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:14:07 crc kubenswrapper[4796]: I1202 20:14:07.290110 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" podStartSLOduration=108.290075314 podStartE2EDuration="1m48.290075314s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:06.975497515 +0000 UTC m=+129.978873089" watchObservedRunningTime="2025-12-02 20:14:07.290075314 +0000 UTC m=+130.293450878" Dec 02 20:14:07 crc kubenswrapper[4796]: I1202 20:14:07.303062 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-g7nb5"] Dec 02 20:14:07 crc kubenswrapper[4796]: E1202 20:14:07.348020 4796 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 20:14:07 crc kubenswrapper[4796]: I1202 20:14:07.950733 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-m672l_03fe6ac0-1095-4336-a25c-4dd0d6e45053/kube-multus/1.log" Dec 02 20:14:07 crc kubenswrapper[4796]: I1202 20:14:07.950813 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-m672l" event={"ID":"03fe6ac0-1095-4336-a25c-4dd0d6e45053","Type":"ContainerStarted","Data":"0e6f41a60d2d2ee7c77df0b01172b537f1cac7f3c8eee3a61006aba7d2721a7f"} Dec 02 20:14:07 crc kubenswrapper[4796]: I1202 20:14:07.950839 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:14:07 crc kubenswrapper[4796]: E1202 20:14:07.951623 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:14:09 crc kubenswrapper[4796]: I1202 20:14:09.264455 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:14:09 crc kubenswrapper[4796]: I1202 20:14:09.264548 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:14:09 crc kubenswrapper[4796]: I1202 20:14:09.264636 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:14:09 crc kubenswrapper[4796]: E1202 20:14:09.264633 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:14:09 crc kubenswrapper[4796]: E1202 20:14:09.264724 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:14:09 crc kubenswrapper[4796]: E1202 20:14:09.265018 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:14:09 crc kubenswrapper[4796]: I1202 20:14:09.265052 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:14:09 crc kubenswrapper[4796]: E1202 20:14:09.265297 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:14:11 crc kubenswrapper[4796]: I1202 20:14:11.264148 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:14:11 crc kubenswrapper[4796]: E1202 20:14:11.264934 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 20:14:11 crc kubenswrapper[4796]: I1202 20:14:11.264217 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:14:11 crc kubenswrapper[4796]: E1202 20:14:11.265059 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-g7nb5" podUID="60c1710d-bf66-4687-8ee7-ea828cde5d53" Dec 02 20:14:11 crc kubenswrapper[4796]: I1202 20:14:11.264278 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:14:11 crc kubenswrapper[4796]: E1202 20:14:11.265125 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 20:14:11 crc kubenswrapper[4796]: I1202 20:14:11.264176 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:14:11 crc kubenswrapper[4796]: E1202 20:14:11.265224 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 20:14:13 crc kubenswrapper[4796]: I1202 20:14:13.264450 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:14:13 crc kubenswrapper[4796]: I1202 20:14:13.264524 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:14:13 crc kubenswrapper[4796]: I1202 20:14:13.264566 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:14:13 crc kubenswrapper[4796]: I1202 20:14:13.266057 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:14:13 crc kubenswrapper[4796]: I1202 20:14:13.267971 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 02 20:14:13 crc kubenswrapper[4796]: I1202 20:14:13.269172 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 02 20:14:13 crc kubenswrapper[4796]: I1202 20:14:13.270435 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 02 20:14:13 crc kubenswrapper[4796]: I1202 20:14:13.270647 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 02 20:14:13 crc kubenswrapper[4796]: I1202 20:14:13.270880 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 02 20:14:13 crc kubenswrapper[4796]: I1202 20:14:13.271031 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.587145 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.638845 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zjrkr"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.639754 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-zjrkr" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.643022 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mvqcs"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.644069 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-mvqcs" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.644383 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.644974 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ctm9n"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.650759 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ctm9n" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.654327 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jfb6p"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.665679 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.666236 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.666853 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.667207 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.667373 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.667395 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.667214 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.667657 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-kgf8g"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.667810 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.667982 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-fh2w9"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.668274 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jfb6p" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.667986 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.668544 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh2w9" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.668611 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-kgf8g" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.668423 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.668932 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-p6qqx"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.669695 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-p6qqx" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.671237 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.671397 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sxkvc"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.671710 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sxkvc" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.671736 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lsdcg"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.672156 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lsdcg" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.672420 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.673564 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.673957 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ztcsx"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.674566 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-ztcsx" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.675275 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.676550 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-8jxpl"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.677020 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6tx4j"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.677367 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-s22mc"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.677386 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.677461 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-8jxpl" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.677845 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s22mc" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.679330 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-rmbh4"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.679845 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.680142 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qfgsc"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.680603 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qfgsc" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.683938 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-js54s"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.684457 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4nq9r"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.684778 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8vbxj"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.685212 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8vbxj" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.685592 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-js54s" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.685817 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4nq9r" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.685966 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bprtg"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.686744 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bprtg" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.686755 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pmh58"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.687319 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pmh58" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.688031 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-hn5qp"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.688455 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-hn5qp" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.694311 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.694894 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.695502 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.695765 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.695980 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.696304 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.696530 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.696851 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.697053 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.697720 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.698606 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.698776 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.699073 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.699237 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.699422 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.699597 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.699740 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.700085 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.700364 4796 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gcsc8"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.700595 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.700700 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.707969 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a3c8abe1-552e-404c-be1f-88f30e467d8f-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-zjrkr\" (UID: \"a3c8abe1-552e-404c-be1f-88f30e467d8f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zjrkr" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.708021 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/2deb1633-420a-4e19-bf12-0c853eb1da21-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ctm9n\" (UID: \"2deb1633-420a-4e19-bf12-0c853eb1da21\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ctm9n" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.708048 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9m9r\" (UniqueName: \"kubernetes.io/projected/2deb1633-420a-4e19-bf12-0c853eb1da21-kube-api-access-n9m9r\") pod \"cluster-samples-operator-665b6dd947-ctm9n\" (UID: \"2deb1633-420a-4e19-bf12-0c853eb1da21\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ctm9n" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.708080 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3ceb7b1c-b699-46ed-9571-0dd1fe3dce69-metrics-tls\") pod \"dns-operator-744455d44c-mvqcs\" (UID: \"3ceb7b1c-b699-46ed-9571-0dd1fe3dce69\") " pod="openshift-dns-operator/dns-operator-744455d44c-mvqcs" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.708106 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3c8abe1-552e-404c-be1f-88f30e467d8f-serving-cert\") pod \"controller-manager-879f6c89f-zjrkr\" (UID: \"a3c8abe1-552e-404c-be1f-88f30e467d8f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zjrkr" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.708131 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlpk7\" (UniqueName: \"kubernetes.io/projected/3ceb7b1c-b699-46ed-9571-0dd1fe3dce69-kube-api-access-dlpk7\") pod \"dns-operator-744455d44c-mvqcs\" (UID: \"3ceb7b1c-b699-46ed-9571-0dd1fe3dce69\") " pod="openshift-dns-operator/dns-operator-744455d44c-mvqcs" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.708215 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3c8abe1-552e-404c-be1f-88f30e467d8f-client-ca\") pod \"controller-manager-879f6c89f-zjrkr\" (UID: \"a3c8abe1-552e-404c-be1f-88f30e467d8f\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-zjrkr" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.708275 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jndnc\" (UniqueName: \"kubernetes.io/projected/a3c8abe1-552e-404c-be1f-88f30e467d8f-kube-api-access-jndnc\") pod \"controller-manager-879f6c89f-zjrkr\" (UID: \"a3c8abe1-552e-404c-be1f-88f30e467d8f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zjrkr" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.708311 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3c8abe1-552e-404c-be1f-88f30e467d8f-config\") pod \"controller-manager-879f6c89f-zjrkr\" (UID: \"a3c8abe1-552e-404c-be1f-88f30e467d8f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zjrkr" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.709393 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rgp7h"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.711676 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.717471 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.719454 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-hj4jh"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.741686 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.742502 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.742635 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.742767 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.742901 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.744293 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-r5ddg"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.744945 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kmzq8"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.742682 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.745403 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.745584 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.745685 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hj4jh" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.745753 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.745927 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.746039 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.746187 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.746471 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.746639 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gcsc8" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.746657 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.746896 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-r5ddg" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.746965 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.747064 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.745583 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kmzq8" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.747534 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.747644 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.747705 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.747902 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.748191 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.748329 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.748334 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.748425 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.748441 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.748462 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.748520 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.748637 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.748810 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 02 20:14:21 crc 
kubenswrapper[4796]: I1202 20:14:21.748863 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.748914 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.749056 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.749122 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.749159 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.749182 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.749280 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.749350 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.749392 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.749452 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.749518 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.749651 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.749725 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.749851 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.750391 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.755168 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.757392 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.757626 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.758088 4796 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.758199 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.758303 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.758385 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.758511 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.758606 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.758711 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.758819 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.758931 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.759064 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.759080 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.759109 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.763657 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.764024 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.766192 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.766687 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.768000 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rlp96"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.768603 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rlp96" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.768961 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.769175 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zhfv7"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.770027 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zhfv7" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.770344 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-fpclt"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.770804 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fpclt" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.771297 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bfwqk"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.771812 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bfwqk" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.772097 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.776121 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c6b87"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.792008 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8nh9j"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.792873 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8qqvs"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.794144 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8qqvs" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.794664 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c6b87" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.794955 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.795115 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8nh9j" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.798941 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-sfw6z"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.807462 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.807561 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.809628 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810108 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c77d403a-df91-47fe-bc4d-486dd5f7142f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-gcsc8\" (UID: \"c77d403a-df91-47fe-bc4d-486dd5f7142f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gcsc8" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810140 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbd52c61-04b7-424a-88c0-71653fd8d65e-config\") pod \"route-controller-manager-6576b87f9c-lsdcg\" (UID: \"bbd52c61-04b7-424a-88c0-71653fd8d65e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lsdcg" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810158 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6tx4j\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810185 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e268554-349b-41e2-a9ec-64b6de541ace-config\") pod \"console-operator-58897d9998-ztcsx\" (UID: \"5e268554-349b-41e2-a9ec-64b6de541ace\") " pod="openshift-console-operator/console-operator-58897d9998-ztcsx" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810208 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3c8abe1-552e-404c-be1f-88f30e467d8f-config\") pod \"controller-manager-879f6c89f-zjrkr\" (UID: \"a3c8abe1-552e-404c-be1f-88f30e467d8f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zjrkr" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810223 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbd52c61-04b7-424a-88c0-71653fd8d65e-serving-cert\") pod \"route-controller-manager-6576b87f9c-lsdcg\" (UID: \"bbd52c61-04b7-424a-88c0-71653fd8d65e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lsdcg" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810240 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6tx4j\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810287 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c77d403a-df91-47fe-bc4d-486dd5f7142f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-gcsc8\" (UID: \"c77d403a-df91-47fe-bc4d-486dd5f7142f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gcsc8" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810305 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27bc5a48-7598-4ca1-a0ce-e08d8a694a13-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-fh2w9\" (UID: \"27bc5a48-7598-4ca1-a0ce-e08d8a694a13\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh2w9" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810323 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e01c8d0d-e590-4e8f-96f0-85b5d7f29275-node-pullsecrets\") pod \"apiserver-76f77b778f-rmbh4\" (UID: \"e01c8d0d-e590-4e8f-96f0-85b5d7f29275\") " pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810340 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/32ceedc0-31dc-4816-bf1c-30b63b08e98a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-bprtg\" (UID: \"32ceedc0-31dc-4816-bf1c-30b63b08e98a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bprtg" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810359 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6tx4j\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810378 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvmmz\" (UniqueName: \"kubernetes.io/projected/a986dc8a-d877-4144-9ed4-6e30cff96787-kube-api-access-bvmmz\") pod \"catalog-operator-68c6474976-pmh58\" (UID: \"a986dc8a-d877-4144-9ed4-6e30cff96787\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pmh58" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810399 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6tx4j\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810416 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e01c8d0d-e590-4e8f-96f0-85b5d7f29275-audit\") pod \"apiserver-76f77b778f-rmbh4\" (UID: \"e01c8d0d-e590-4e8f-96f0-85b5d7f29275\") " pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810433 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9525db23-79d5-4936-b9a3-7af09b979ad6-config\") pod \"authentication-operator-69f744f599-kgf8g\" (UID: \"9525db23-79d5-4936-b9a3-7af09b979ad6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kgf8g" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810451 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mz7v\" (UniqueName: \"kubernetes.io/projected/9525db23-79d5-4936-b9a3-7af09b979ad6-kube-api-access-4mz7v\") pod \"authentication-operator-69f744f599-kgf8g\" (UID: \"9525db23-79d5-4936-b9a3-7af09b979ad6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kgf8g" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810472 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/32ceedc0-31dc-4816-bf1c-30b63b08e98a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-bprtg\" (UID: \"32ceedc0-31dc-4816-bf1c-30b63b08e98a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bprtg" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810492 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a986dc8a-d877-4144-9ed4-6e30cff96787-profile-collector-cert\") pod \"catalog-operator-68c6474976-pmh58\" (UID: \"a986dc8a-d877-4144-9ed4-6e30cff96787\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pmh58" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810513 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/2deb1633-420a-4e19-bf12-0c853eb1da21-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ctm9n\" (UID: \"2deb1633-420a-4e19-bf12-0c853eb1da21\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ctm9n" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810530 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3ceb7b1c-b699-46ed-9571-0dd1fe3dce69-metrics-tls\") pod \"dns-operator-744455d44c-mvqcs\" (UID: \"3ceb7b1c-b699-46ed-9571-0dd1fe3dce69\") " pod="openshift-dns-operator/dns-operator-744455d44c-mvqcs" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810587 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a3c8abe1-552e-404c-be1f-88f30e467d8f-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-zjrkr\" (UID: \"a3c8abe1-552e-404c-be1f-88f30e467d8f\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-zjrkr" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810609 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f87b13cb-58d7-402e-b4cc-0206862019dd-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-kmzq8\" (UID: \"f87b13cb-58d7-402e-b4cc-0206862019dd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kmzq8" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810627 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3c8abe1-552e-404c-be1f-88f30e467d8f-serving-cert\") pod \"controller-manager-879f6c89f-zjrkr\" (UID: \"a3c8abe1-552e-404c-be1f-88f30e467d8f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zjrkr" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810645 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qpxk\" (UniqueName: \"kubernetes.io/projected/f87b13cb-58d7-402e-b4cc-0206862019dd-kube-api-access-4qpxk\") pod \"kube-storage-version-migrator-operator-b67b599dd-kmzq8\" (UID: \"f87b13cb-58d7-402e-b4cc-0206862019dd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kmzq8" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810664 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9775f308-f5d8-4bc7-bfc0-00f065833a55-service-ca-bundle\") pod \"router-default-5444994796-hn5qp\" (UID: \"9775f308-f5d8-4bc7-bfc0-00f065833a55\") " pod="openshift-ingress/router-default-5444994796-hn5qp" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810680 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e01c8d0d-e590-4e8f-96f0-85b5d7f29275-config\") pod \"apiserver-76f77b778f-rmbh4\" (UID: \"e01c8d0d-e590-4e8f-96f0-85b5d7f29275\") " pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810697 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7fjz\" (UniqueName: \"kubernetes.io/projected/32ceedc0-31dc-4816-bf1c-30b63b08e98a-kube-api-access-k7fjz\") pod \"cluster-image-registry-operator-dc59b4c8b-bprtg\" (UID: \"32ceedc0-31dc-4816-bf1c-30b63b08e98a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bprtg" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810715 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38aa3029-8a07-415a-9da2-9a0dca76fdb0-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4nq9r\" (UID: \"38aa3029-8a07-415a-9da2-9a0dca76fdb0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4nq9r" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810731 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e01c8d0d-e590-4e8f-96f0-85b5d7f29275-image-import-ca\") pod 
\"apiserver-76f77b778f-rmbh4\" (UID: \"e01c8d0d-e590-4e8f-96f0-85b5d7f29275\") " pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810747 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnw2v\" (UniqueName: \"kubernetes.io/projected/21dab2f8-0cdd-4cd9-b3ed-aa0d4bce10e8-kube-api-access-qnw2v\") pod \"openshift-config-operator-7777fb866f-jfb6p\" (UID: \"21dab2f8-0cdd-4cd9-b3ed-aa0d4bce10e8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jfb6p" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810766 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlpk7\" (UniqueName: \"kubernetes.io/projected/3ceb7b1c-b699-46ed-9571-0dd1fe3dce69-kube-api-access-dlpk7\") pod \"dns-operator-744455d44c-mvqcs\" (UID: \"3ceb7b1c-b699-46ed-9571-0dd1fe3dce69\") " pod="openshift-dns-operator/dns-operator-744455d44c-mvqcs" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810788 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f8f831a3-1114-43e1-b10b-d5b31905aa9e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qfgsc\" (UID: \"f8f831a3-1114-43e1-b10b-d5b31905aa9e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qfgsc" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810807 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9mhn\" (UniqueName: \"kubernetes.io/projected/d39a70e8-5d53-431a-8413-bbb2041fc8dd-kube-api-access-v9mhn\") pod \"oauth-openshift-558db77b4-6tx4j\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810834 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8d7de865-36cb-4d66-b97f-4e73717ea4a9-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-hj4jh\" (UID: \"8d7de865-36cb-4d66-b97f-4e73717ea4a9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hj4jh" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810850 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3c8abe1-552e-404c-be1f-88f30e467d8f-client-ca\") pod \"controller-manager-879f6c89f-zjrkr\" (UID: \"a3c8abe1-552e-404c-be1f-88f30e467d8f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zjrkr" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810867 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e01c8d0d-e590-4e8f-96f0-85b5d7f29275-encryption-config\") pod \"apiserver-76f77b778f-rmbh4\" (UID: \"e01c8d0d-e590-4e8f-96f0-85b5d7f29275\") " pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810886 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jndnc\" (UniqueName: \"kubernetes.io/projected/a3c8abe1-552e-404c-be1f-88f30e467d8f-kube-api-access-jndnc\") pod 
\"controller-manager-879f6c89f-zjrkr\" (UID: \"a3c8abe1-552e-404c-be1f-88f30e467d8f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zjrkr" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810911 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6tx4j\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810927 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8d7de865-36cb-4d66-b97f-4e73717ea4a9-proxy-tls\") pod \"machine-config-controller-84d6567774-hj4jh\" (UID: \"8d7de865-36cb-4d66-b97f-4e73717ea4a9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hj4jh" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810944 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/27bc5a48-7598-4ca1-a0ce-e08d8a694a13-etcd-client\") pod \"apiserver-7bbb656c7d-fh2w9\" (UID: \"27bc5a48-7598-4ca1-a0ce-e08d8a694a13\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh2w9" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810960 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e01c8d0d-e590-4e8f-96f0-85b5d7f29275-etcd-client\") pod \"apiserver-76f77b778f-rmbh4\" (UID: \"e01c8d0d-e590-4e8f-96f0-85b5d7f29275\") " pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810977 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e01c8d0d-e590-4e8f-96f0-85b5d7f29275-serving-cert\") pod \"apiserver-76f77b778f-rmbh4\" (UID: \"e01c8d0d-e590-4e8f-96f0-85b5d7f29275\") " pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.810993 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27bc5a48-7598-4ca1-a0ce-e08d8a694a13-serving-cert\") pod \"apiserver-7bbb656c7d-fh2w9\" (UID: \"27bc5a48-7598-4ca1-a0ce-e08d8a694a13\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh2w9" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.811009 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/27bc5a48-7598-4ca1-a0ce-e08d8a694a13-encryption-config\") pod \"apiserver-7bbb656c7d-fh2w9\" (UID: \"27bc5a48-7598-4ca1-a0ce-e08d8a694a13\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh2w9" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.811023 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/27bc5a48-7598-4ca1-a0ce-e08d8a694a13-audit-dir\") pod \"apiserver-7bbb656c7d-fh2w9\" (UID: \"27bc5a48-7598-4ca1-a0ce-e08d8a694a13\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh2w9" Dec 02 20:14:21 crc 
kubenswrapper[4796]: I1202 20:14:21.811045 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d39a70e8-5d53-431a-8413-bbb2041fc8dd-audit-dir\") pod \"oauth-openshift-558db77b4-6tx4j\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.811060 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/27bc5a48-7598-4ca1-a0ce-e08d8a694a13-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-fh2w9\" (UID: \"27bc5a48-7598-4ca1-a0ce-e08d8a694a13\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh2w9" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.811077 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h99sr\" (UniqueName: \"kubernetes.io/projected/92175868-fbbe-4efc-824d-9af2cd73b853-kube-api-access-h99sr\") pod \"machine-approver-56656f9798-s22mc\" (UID: \"92175868-fbbe-4efc-824d-9af2cd73b853\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s22mc" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.811092 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6tx4j\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.811109 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d39a70e8-5d53-431a-8413-bbb2041fc8dd-audit-policies\") pod \"oauth-openshift-558db77b4-6tx4j\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.811123 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9525db23-79d5-4936-b9a3-7af09b979ad6-service-ca-bundle\") pod \"authentication-operator-69f744f599-kgf8g\" (UID: \"9525db23-79d5-4936-b9a3-7af09b979ad6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kgf8g" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.811138 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e268554-349b-41e2-a9ec-64b6de541ace-serving-cert\") pod \"console-operator-58897d9998-ztcsx\" (UID: \"5e268554-349b-41e2-a9ec-64b6de541ace\") " pod="openshift-console-operator/console-operator-58897d9998-ztcsx" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.811154 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21dab2f8-0cdd-4cd9-b3ed-aa0d4bce10e8-serving-cert\") pod \"openshift-config-operator-7777fb866f-jfb6p\" (UID: \"21dab2f8-0cdd-4cd9-b3ed-aa0d4bce10e8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jfb6p" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 
20:14:21.811172 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6tx4j\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.811191 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e01c8d0d-e590-4e8f-96f0-85b5d7f29275-trusted-ca-bundle\") pod \"apiserver-76f77b778f-rmbh4\" (UID: \"e01c8d0d-e590-4e8f-96f0-85b5d7f29275\") " pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.811210 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9525db23-79d5-4936-b9a3-7af09b979ad6-serving-cert\") pod \"authentication-operator-69f744f599-kgf8g\" (UID: \"9525db23-79d5-4936-b9a3-7af09b979ad6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kgf8g" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.811224 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e01c8d0d-e590-4e8f-96f0-85b5d7f29275-audit-dir\") pod \"apiserver-76f77b778f-rmbh4\" (UID: \"e01c8d0d-e590-4e8f-96f0-85b5d7f29275\") " pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.811241 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/21dab2f8-0cdd-4cd9-b3ed-aa0d4bce10e8-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jfb6p\" (UID: \"21dab2f8-0cdd-4cd9-b3ed-aa0d4bce10e8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jfb6p" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.811279 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjzl5\" (UniqueName: \"kubernetes.io/projected/c77d403a-df91-47fe-bc4d-486dd5f7142f-kube-api-access-wjzl5\") pod \"openshift-controller-manager-operator-756b6f6bc6-gcsc8\" (UID: \"c77d403a-df91-47fe-bc4d-486dd5f7142f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gcsc8" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.811301 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9775f308-f5d8-4bc7-bfc0-00f065833a55-default-certificate\") pod \"router-default-5444994796-hn5qp\" (UID: \"9775f308-f5d8-4bc7-bfc0-00f065833a55\") " pod="openshift-ingress/router-default-5444994796-hn5qp" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.811318 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38aa3029-8a07-415a-9da2-9a0dca76fdb0-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4nq9r\" (UID: \"38aa3029-8a07-415a-9da2-9a0dca76fdb0\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4nq9r" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.811371 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l4mj\" (UniqueName: \"kubernetes.io/projected/e01c8d0d-e590-4e8f-96f0-85b5d7f29275-kube-api-access-6l4mj\") pod \"apiserver-76f77b778f-rmbh4\" (UID: \"e01c8d0d-e590-4e8f-96f0-85b5d7f29275\") " pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.811390 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f87b13cb-58d7-402e-b4cc-0206862019dd-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-kmzq8\" (UID: \"f87b13cb-58d7-402e-b4cc-0206862019dd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kmzq8" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.811404 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5e268554-349b-41e2-a9ec-64b6de541ace-trusted-ca\") pod \"console-operator-58897d9998-ztcsx\" (UID: \"5e268554-349b-41e2-a9ec-64b6de541ace\") " pod="openshift-console-operator/console-operator-58897d9998-ztcsx" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.811474 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9525db23-79d5-4936-b9a3-7af09b979ad6-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-kgf8g\" (UID: \"9525db23-79d5-4936-b9a3-7af09b979ad6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kgf8g" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.811492 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6tx4j\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.811540 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9m9r\" (UniqueName: \"kubernetes.io/projected/2deb1633-420a-4e19-bf12-0c853eb1da21-kube-api-access-n9m9r\") pod \"cluster-samples-operator-665b6dd947-ctm9n\" (UID: \"2deb1633-420a-4e19-bf12-0c853eb1da21\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ctm9n" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.811556 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/32ceedc0-31dc-4816-bf1c-30b63b08e98a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-bprtg\" (UID: \"32ceedc0-31dc-4816-bf1c-30b63b08e98a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bprtg" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.811573 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/27bc5a48-7598-4ca1-a0ce-e08d8a694a13-audit-policies\") pod 
\"apiserver-7bbb656c7d-fh2w9\" (UID: \"27bc5a48-7598-4ca1-a0ce-e08d8a694a13\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh2w9" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.811608 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5wbr\" (UniqueName: \"kubernetes.io/projected/bbd52c61-04b7-424a-88c0-71653fd8d65e-kube-api-access-x5wbr\") pod \"route-controller-manager-6576b87f9c-lsdcg\" (UID: \"bbd52c61-04b7-424a-88c0-71653fd8d65e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lsdcg" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.811624 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9775f308-f5d8-4bc7-bfc0-00f065833a55-metrics-certs\") pod \"router-default-5444994796-hn5qp\" (UID: \"9775f308-f5d8-4bc7-bfc0-00f065833a55\") " pod="openshift-ingress/router-default-5444994796-hn5qp" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.811639 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e01c8d0d-e590-4e8f-96f0-85b5d7f29275-etcd-serving-ca\") pod \"apiserver-76f77b778f-rmbh4\" (UID: \"e01c8d0d-e590-4e8f-96f0-85b5d7f29275\") " pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.811655 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8f831a3-1114-43e1-b10b-d5b31905aa9e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qfgsc\" (UID: \"f8f831a3-1114-43e1-b10b-d5b31905aa9e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qfgsc" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.811693 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bbd52c61-04b7-424a-88c0-71653fd8d65e-client-ca\") pod \"route-controller-manager-6576b87f9c-lsdcg\" (UID: \"bbd52c61-04b7-424a-88c0-71653fd8d65e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lsdcg" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.811712 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr45n\" (UniqueName: \"kubernetes.io/projected/27bc5a48-7598-4ca1-a0ce-e08d8a694a13-kube-api-access-dr45n\") pod \"apiserver-7bbb656c7d-fh2w9\" (UID: \"27bc5a48-7598-4ca1-a0ce-e08d8a694a13\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh2w9" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.811757 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brrnz\" (UniqueName: \"kubernetes.io/projected/5e268554-349b-41e2-a9ec-64b6de541ace-kube-api-access-brrnz\") pod \"console-operator-58897d9998-ztcsx\" (UID: \"5e268554-349b-41e2-a9ec-64b6de541ace\") " pod="openshift-console-operator/console-operator-58897d9998-ztcsx" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.811775 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbvcl\" (UniqueName: \"kubernetes.io/projected/9775f308-f5d8-4bc7-bfc0-00f065833a55-kube-api-access-lbvcl\") pod 
\"router-default-5444994796-hn5qp\" (UID: \"9775f308-f5d8-4bc7-bfc0-00f065833a55\") " pod="openshift-ingress/router-default-5444994796-hn5qp" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.811790 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a986dc8a-d877-4144-9ed4-6e30cff96787-srv-cert\") pod \"catalog-operator-68c6474976-pmh58\" (UID: \"a986dc8a-d877-4144-9ed4-6e30cff96787\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pmh58" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.811835 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/92175868-fbbe-4efc-824d-9af2cd73b853-machine-approver-tls\") pod \"machine-approver-56656f9798-s22mc\" (UID: \"92175868-fbbe-4efc-824d-9af2cd73b853\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s22mc" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.811855 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92175868-fbbe-4efc-824d-9af2cd73b853-config\") pod \"machine-approver-56656f9798-s22mc\" (UID: \"92175868-fbbe-4efc-824d-9af2cd73b853\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s22mc" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.811872 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8f831a3-1114-43e1-b10b-d5b31905aa9e-config\") pod \"kube-controller-manager-operator-78b949d7b-qfgsc\" (UID: \"f8f831a3-1114-43e1-b10b-d5b31905aa9e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qfgsc" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.811922 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6tx4j\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.811947 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6tx4j\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.812003 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6tx4j\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.812024 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38aa3029-8a07-415a-9da2-9a0dca76fdb0-config\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-4nq9r\" (UID: \"38aa3029-8a07-415a-9da2-9a0dca76fdb0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4nq9r" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.812065 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/92175868-fbbe-4efc-824d-9af2cd73b853-auth-proxy-config\") pod \"machine-approver-56656f9798-s22mc\" (UID: \"92175868-fbbe-4efc-824d-9af2cd73b853\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s22mc" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.812088 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9775f308-f5d8-4bc7-bfc0-00f065833a55-stats-auth\") pod \"router-default-5444994796-hn5qp\" (UID: \"9775f308-f5d8-4bc7-bfc0-00f065833a55\") " pod="openshift-ingress/router-default-5444994796-hn5qp" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.812108 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgmlz\" (UniqueName: \"kubernetes.io/projected/8d7de865-36cb-4d66-b97f-4e73717ea4a9-kube-api-access-bgmlz\") pod \"machine-config-controller-84d6567774-hj4jh\" (UID: \"8d7de865-36cb-4d66-b97f-4e73717ea4a9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hj4jh" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.813422 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3c8abe1-552e-404c-be1f-88f30e467d8f-config\") pod \"controller-manager-879f6c89f-zjrkr\" (UID: \"a3c8abe1-552e-404c-be1f-88f30e467d8f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zjrkr" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.815065 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3c8abe1-552e-404c-be1f-88f30e467d8f-client-ca\") pod \"controller-manager-879f6c89f-zjrkr\" (UID: \"a3c8abe1-552e-404c-be1f-88f30e467d8f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zjrkr" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.816867 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.820070 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/2deb1633-420a-4e19-bf12-0c853eb1da21-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ctm9n\" (UID: \"2deb1633-420a-4e19-bf12-0c853eb1da21\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ctm9n" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.822537 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.822670 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a3c8abe1-552e-404c-be1f-88f30e467d8f-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-zjrkr\" (UID: \"a3c8abe1-552e-404c-be1f-88f30e467d8f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zjrkr" Dec 02 20:14:21 crc 
kubenswrapper[4796]: I1202 20:14:21.823668 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3ceb7b1c-b699-46ed-9571-0dd1fe3dce69-metrics-tls\") pod \"dns-operator-744455d44c-mvqcs\" (UID: \"3ceb7b1c-b699-46ed-9571-0dd1fe3dce69\") " pod="openshift-dns-operator/dns-operator-744455d44c-mvqcs" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.825424 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqd2f"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.825633 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-sfw6z" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.826512 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-54d99"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.826698 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqd2f" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.827317 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-cjf62"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.827570 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-54d99" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.828330 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411760-zzp4z"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.828533 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-cjf62" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.829815 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zjrkr"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.829868 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ctm9n"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.829883 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mvqcs"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.829895 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jfb6p"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.829907 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-mfxpp"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.830114 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411760-zzp4z" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.831167 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sxkvc"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.831195 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-p6qqx"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.831207 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bprtg"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.831220 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8vbxj"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.831294 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-mfxpp" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.833645 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rgp7h"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.836245 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3c8abe1-552e-404c-be1f-88f30e467d8f-serving-cert\") pod \"controller-manager-879f6c89f-zjrkr\" (UID: \"a3c8abe1-552e-404c-be1f-88f30e467d8f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zjrkr" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.836836 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-fh2w9"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.839717 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bfwqk"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.839748 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-8jxpl"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.840935 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4nq9r"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.842431 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-sfw6z"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.842647 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.844004 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-kgf8g"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.845822 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ztcsx"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.847348 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-hj4jh"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.848466 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c6b87"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.849809 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-rmbh4"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.851366 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6tx4j"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.852461 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zhfv7"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.854901 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-r5ddg"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.856531 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rlp96"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.856610 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lsdcg"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.861687 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pmh58"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.862052 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.863241 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-js54s"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.864326 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqd2f"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.865662 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gcsc8"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.867438 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8nh9j"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.869117 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8qqvs"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.870010 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qfgsc"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.870950 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-slnp6"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.871858 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-slnp6" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.873247 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-fpclt"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.874405 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kmzq8"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.877104 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-ptmbv"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.881506 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411760-zzp4z"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.881653 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-ptmbv" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.882993 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.888015 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-54d99"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.891245 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-cjf62"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.892974 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-slnp6"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.894505 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-mfxpp"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.895553 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tjstr"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.897529 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tjstr"] Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.897905 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-tjstr" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.902664 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.913106 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9525db23-79d5-4936-b9a3-7af09b979ad6-config\") pod \"authentication-operator-69f744f599-kgf8g\" (UID: \"9525db23-79d5-4936-b9a3-7af09b979ad6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kgf8g" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.913152 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mz7v\" (UniqueName: \"kubernetes.io/projected/9525db23-79d5-4936-b9a3-7af09b979ad6-kube-api-access-4mz7v\") pod \"authentication-operator-69f744f599-kgf8g\" (UID: \"9525db23-79d5-4936-b9a3-7af09b979ad6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kgf8g" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.913185 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/32ceedc0-31dc-4816-bf1c-30b63b08e98a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-bprtg\" (UID: \"32ceedc0-31dc-4816-bf1c-30b63b08e98a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bprtg" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.913207 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a986dc8a-d877-4144-9ed4-6e30cff96787-profile-collector-cert\") pod \"catalog-operator-68c6474976-pmh58\" (UID: \"a986dc8a-d877-4144-9ed4-6e30cff96787\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pmh58" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.913229 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f87b13cb-58d7-402e-b4cc-0206862019dd-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-kmzq8\" (UID: \"f87b13cb-58d7-402e-b4cc-0206862019dd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kmzq8" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.913268 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qpxk\" (UniqueName: \"kubernetes.io/projected/f87b13cb-58d7-402e-b4cc-0206862019dd-kube-api-access-4qpxk\") pod \"kube-storage-version-migrator-operator-b67b599dd-kmzq8\" (UID: \"f87b13cb-58d7-402e-b4cc-0206862019dd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kmzq8" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.913298 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7fjz\" (UniqueName: \"kubernetes.io/projected/32ceedc0-31dc-4816-bf1c-30b63b08e98a-kube-api-access-k7fjz\") pod \"cluster-image-registry-operator-dc59b4c8b-bprtg\" (UID: \"32ceedc0-31dc-4816-bf1c-30b63b08e98a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bprtg" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.913319 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38aa3029-8a07-415a-9da2-9a0dca76fdb0-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4nq9r\" (UID: \"38aa3029-8a07-415a-9da2-9a0dca76fdb0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4nq9r" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.913339 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9775f308-f5d8-4bc7-bfc0-00f065833a55-service-ca-bundle\") pod \"router-default-5444994796-hn5qp\" (UID: \"9775f308-f5d8-4bc7-bfc0-00f065833a55\") " pod="openshift-ingress/router-default-5444994796-hn5qp" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.913364 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e01c8d0d-e590-4e8f-96f0-85b5d7f29275-config\") pod \"apiserver-76f77b778f-rmbh4\" (UID: \"e01c8d0d-e590-4e8f-96f0-85b5d7f29275\") " pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.913401 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e01c8d0d-e590-4e8f-96f0-85b5d7f29275-image-import-ca\") pod \"apiserver-76f77b778f-rmbh4\" (UID: \"e01c8d0d-e590-4e8f-96f0-85b5d7f29275\") " pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.913423 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnw2v\" (UniqueName: \"kubernetes.io/projected/21dab2f8-0cdd-4cd9-b3ed-aa0d4bce10e8-kube-api-access-qnw2v\") pod \"openshift-config-operator-7777fb866f-jfb6p\" (UID: \"21dab2f8-0cdd-4cd9-b3ed-aa0d4bce10e8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jfb6p" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.913449 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f8f831a3-1114-43e1-b10b-d5b31905aa9e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qfgsc\" (UID: \"f8f831a3-1114-43e1-b10b-d5b31905aa9e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qfgsc" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.913471 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9mhn\" (UniqueName: \"kubernetes.io/projected/d39a70e8-5d53-431a-8413-bbb2041fc8dd-kube-api-access-v9mhn\") pod \"oauth-openshift-558db77b4-6tx4j\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.913494 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8d7de865-36cb-4d66-b97f-4e73717ea4a9-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-hj4jh\" (UID: \"8d7de865-36cb-4d66-b97f-4e73717ea4a9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hj4jh" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.913521 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/e01c8d0d-e590-4e8f-96f0-85b5d7f29275-encryption-config\") pod \"apiserver-76f77b778f-rmbh4\" (UID: \"e01c8d0d-e590-4e8f-96f0-85b5d7f29275\") " pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.913554 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8d7de865-36cb-4d66-b97f-4e73717ea4a9-proxy-tls\") pod \"machine-config-controller-84d6567774-hj4jh\" (UID: \"8d7de865-36cb-4d66-b97f-4e73717ea4a9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hj4jh" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.913575 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/27bc5a48-7598-4ca1-a0ce-e08d8a694a13-etcd-client\") pod \"apiserver-7bbb656c7d-fh2w9\" (UID: \"27bc5a48-7598-4ca1-a0ce-e08d8a694a13\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh2w9" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.913591 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e01c8d0d-e590-4e8f-96f0-85b5d7f29275-etcd-client\") pod \"apiserver-76f77b778f-rmbh4\" (UID: \"e01c8d0d-e590-4e8f-96f0-85b5d7f29275\") " pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.913607 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e01c8d0d-e590-4e8f-96f0-85b5d7f29275-serving-cert\") pod \"apiserver-76f77b778f-rmbh4\" (UID: \"e01c8d0d-e590-4e8f-96f0-85b5d7f29275\") " pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.913626 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6tx4j\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.913650 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d39a70e8-5d53-431a-8413-bbb2041fc8dd-audit-dir\") pod \"oauth-openshift-558db77b4-6tx4j\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.913670 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/27bc5a48-7598-4ca1-a0ce-e08d8a694a13-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-fh2w9\" (UID: \"27bc5a48-7598-4ca1-a0ce-e08d8a694a13\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh2w9" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.913691 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27bc5a48-7598-4ca1-a0ce-e08d8a694a13-serving-cert\") pod \"apiserver-7bbb656c7d-fh2w9\" (UID: \"27bc5a48-7598-4ca1-a0ce-e08d8a694a13\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh2w9" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.913711 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/27bc5a48-7598-4ca1-a0ce-e08d8a694a13-encryption-config\") pod \"apiserver-7bbb656c7d-fh2w9\" (UID: \"27bc5a48-7598-4ca1-a0ce-e08d8a694a13\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh2w9" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.913729 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/27bc5a48-7598-4ca1-a0ce-e08d8a694a13-audit-dir\") pod \"apiserver-7bbb656c7d-fh2w9\" (UID: \"27bc5a48-7598-4ca1-a0ce-e08d8a694a13\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh2w9" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.913748 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h99sr\" (UniqueName: \"kubernetes.io/projected/92175868-fbbe-4efc-824d-9af2cd73b853-kube-api-access-h99sr\") pod \"machine-approver-56656f9798-s22mc\" (UID: \"92175868-fbbe-4efc-824d-9af2cd73b853\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s22mc" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.913769 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6tx4j\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.913788 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d39a70e8-5d53-431a-8413-bbb2041fc8dd-audit-policies\") pod \"oauth-openshift-558db77b4-6tx4j\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.913817 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9525db23-79d5-4936-b9a3-7af09b979ad6-service-ca-bundle\") pod \"authentication-operator-69f744f599-kgf8g\" (UID: \"9525db23-79d5-4936-b9a3-7af09b979ad6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kgf8g" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.913839 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e268554-349b-41e2-a9ec-64b6de541ace-serving-cert\") pod \"console-operator-58897d9998-ztcsx\" (UID: \"5e268554-349b-41e2-a9ec-64b6de541ace\") " pod="openshift-console-operator/console-operator-58897d9998-ztcsx" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.913891 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6tx4j\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.913912 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/e01c8d0d-e590-4e8f-96f0-85b5d7f29275-trusted-ca-bundle\") pod \"apiserver-76f77b778f-rmbh4\" (UID: \"e01c8d0d-e590-4e8f-96f0-85b5d7f29275\") " pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.913928 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21dab2f8-0cdd-4cd9-b3ed-aa0d4bce10e8-serving-cert\") pod \"openshift-config-operator-7777fb866f-jfb6p\" (UID: \"21dab2f8-0cdd-4cd9-b3ed-aa0d4bce10e8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jfb6p" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.913949 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjzl5\" (UniqueName: \"kubernetes.io/projected/c77d403a-df91-47fe-bc4d-486dd5f7142f-kube-api-access-wjzl5\") pod \"openshift-controller-manager-operator-756b6f6bc6-gcsc8\" (UID: \"c77d403a-df91-47fe-bc4d-486dd5f7142f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gcsc8" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.913968 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9775f308-f5d8-4bc7-bfc0-00f065833a55-default-certificate\") pod \"router-default-5444994796-hn5qp\" (UID: \"9775f308-f5d8-4bc7-bfc0-00f065833a55\") " pod="openshift-ingress/router-default-5444994796-hn5qp" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.913988 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9525db23-79d5-4936-b9a3-7af09b979ad6-serving-cert\") pod \"authentication-operator-69f744f599-kgf8g\" (UID: \"9525db23-79d5-4936-b9a3-7af09b979ad6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kgf8g" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.914004 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e01c8d0d-e590-4e8f-96f0-85b5d7f29275-audit-dir\") pod \"apiserver-76f77b778f-rmbh4\" (UID: \"e01c8d0d-e590-4e8f-96f0-85b5d7f29275\") " pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.914021 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/21dab2f8-0cdd-4cd9-b3ed-aa0d4bce10e8-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jfb6p\" (UID: \"21dab2f8-0cdd-4cd9-b3ed-aa0d4bce10e8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jfb6p" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.914040 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38aa3029-8a07-415a-9da2-9a0dca76fdb0-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4nq9r\" (UID: \"38aa3029-8a07-415a-9da2-9a0dca76fdb0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4nq9r" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.914065 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l4mj\" (UniqueName: \"kubernetes.io/projected/e01c8d0d-e590-4e8f-96f0-85b5d7f29275-kube-api-access-6l4mj\") pod 
\"apiserver-76f77b778f-rmbh4\" (UID: \"e01c8d0d-e590-4e8f-96f0-85b5d7f29275\") " pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.914087 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f87b13cb-58d7-402e-b4cc-0206862019dd-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-kmzq8\" (UID: \"f87b13cb-58d7-402e-b4cc-0206862019dd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kmzq8" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.914113 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9525db23-79d5-4936-b9a3-7af09b979ad6-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-kgf8g\" (UID: \"9525db23-79d5-4936-b9a3-7af09b979ad6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kgf8g" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.914131 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5e268554-349b-41e2-a9ec-64b6de541ace-trusted-ca\") pod \"console-operator-58897d9998-ztcsx\" (UID: \"5e268554-349b-41e2-a9ec-64b6de541ace\") " pod="openshift-console-operator/console-operator-58897d9998-ztcsx" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.914162 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/32ceedc0-31dc-4816-bf1c-30b63b08e98a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-bprtg\" (UID: \"32ceedc0-31dc-4816-bf1c-30b63b08e98a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bprtg" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.914183 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6tx4j\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.914234 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/27bc5a48-7598-4ca1-a0ce-e08d8a694a13-audit-policies\") pod \"apiserver-7bbb656c7d-fh2w9\" (UID: \"27bc5a48-7598-4ca1-a0ce-e08d8a694a13\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh2w9" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.914230 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d39a70e8-5d53-431a-8413-bbb2041fc8dd-audit-dir\") pod \"oauth-openshift-558db77b4-6tx4j\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.914273 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5wbr\" (UniqueName: \"kubernetes.io/projected/bbd52c61-04b7-424a-88c0-71653fd8d65e-kube-api-access-x5wbr\") pod \"route-controller-manager-6576b87f9c-lsdcg\" (UID: \"bbd52c61-04b7-424a-88c0-71653fd8d65e\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lsdcg" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.914329 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/27bc5a48-7598-4ca1-a0ce-e08d8a694a13-audit-dir\") pod \"apiserver-7bbb656c7d-fh2w9\" (UID: \"27bc5a48-7598-4ca1-a0ce-e08d8a694a13\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh2w9" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.914361 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9775f308-f5d8-4bc7-bfc0-00f065833a55-metrics-certs\") pod \"router-default-5444994796-hn5qp\" (UID: \"9775f308-f5d8-4bc7-bfc0-00f065833a55\") " pod="openshift-ingress/router-default-5444994796-hn5qp" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.914396 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e01c8d0d-e590-4e8f-96f0-85b5d7f29275-etcd-serving-ca\") pod \"apiserver-76f77b778f-rmbh4\" (UID: \"e01c8d0d-e590-4e8f-96f0-85b5d7f29275\") " pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.914427 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8f831a3-1114-43e1-b10b-d5b31905aa9e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qfgsc\" (UID: \"f8f831a3-1114-43e1-b10b-d5b31905aa9e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qfgsc" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.914452 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bbd52c61-04b7-424a-88c0-71653fd8d65e-client-ca\") pod \"route-controller-manager-6576b87f9c-lsdcg\" (UID: \"bbd52c61-04b7-424a-88c0-71653fd8d65e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lsdcg" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.914481 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr45n\" (UniqueName: \"kubernetes.io/projected/27bc5a48-7598-4ca1-a0ce-e08d8a694a13-kube-api-access-dr45n\") pod \"apiserver-7bbb656c7d-fh2w9\" (UID: \"27bc5a48-7598-4ca1-a0ce-e08d8a694a13\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh2w9" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.914506 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brrnz\" (UniqueName: \"kubernetes.io/projected/5e268554-349b-41e2-a9ec-64b6de541ace-kube-api-access-brrnz\") pod \"console-operator-58897d9998-ztcsx\" (UID: \"5e268554-349b-41e2-a9ec-64b6de541ace\") " pod="openshift-console-operator/console-operator-58897d9998-ztcsx" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.914536 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/92175868-fbbe-4efc-824d-9af2cd73b853-machine-approver-tls\") pod \"machine-approver-56656f9798-s22mc\" (UID: \"92175868-fbbe-4efc-824d-9af2cd73b853\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s22mc" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.914563 4796 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92175868-fbbe-4efc-824d-9af2cd73b853-config\") pod \"machine-approver-56656f9798-s22mc\" (UID: \"92175868-fbbe-4efc-824d-9af2cd73b853\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s22mc" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.914587 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8f831a3-1114-43e1-b10b-d5b31905aa9e-config\") pod \"kube-controller-manager-operator-78b949d7b-qfgsc\" (UID: \"f8f831a3-1114-43e1-b10b-d5b31905aa9e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qfgsc" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.914618 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6tx4j\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.914645 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbvcl\" (UniqueName: \"kubernetes.io/projected/9775f308-f5d8-4bc7-bfc0-00f065833a55-kube-api-access-lbvcl\") pod \"router-default-5444994796-hn5qp\" (UID: \"9775f308-f5d8-4bc7-bfc0-00f065833a55\") " pod="openshift-ingress/router-default-5444994796-hn5qp" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.914670 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a986dc8a-d877-4144-9ed4-6e30cff96787-srv-cert\") pod \"catalog-operator-68c6474976-pmh58\" (UID: \"a986dc8a-d877-4144-9ed4-6e30cff96787\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pmh58" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.914706 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6tx4j\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.914743 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6tx4j\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.914776 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38aa3029-8a07-415a-9da2-9a0dca76fdb0-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4nq9r\" (UID: \"38aa3029-8a07-415a-9da2-9a0dca76fdb0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4nq9r" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.914809 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/92175868-fbbe-4efc-824d-9af2cd73b853-auth-proxy-config\") pod \"machine-approver-56656f9798-s22mc\" (UID: \"92175868-fbbe-4efc-824d-9af2cd73b853\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s22mc" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.914836 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9775f308-f5d8-4bc7-bfc0-00f065833a55-stats-auth\") pod \"router-default-5444994796-hn5qp\" (UID: \"9775f308-f5d8-4bc7-bfc0-00f065833a55\") " pod="openshift-ingress/router-default-5444994796-hn5qp" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.914872 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgmlz\" (UniqueName: \"kubernetes.io/projected/8d7de865-36cb-4d66-b97f-4e73717ea4a9-kube-api-access-bgmlz\") pod \"machine-config-controller-84d6567774-hj4jh\" (UID: \"8d7de865-36cb-4d66-b97f-4e73717ea4a9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hj4jh" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.914904 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c77d403a-df91-47fe-bc4d-486dd5f7142f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-gcsc8\" (UID: \"c77d403a-df91-47fe-bc4d-486dd5f7142f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gcsc8" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.914929 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbd52c61-04b7-424a-88c0-71653fd8d65e-config\") pod \"route-controller-manager-6576b87f9c-lsdcg\" (UID: \"bbd52c61-04b7-424a-88c0-71653fd8d65e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lsdcg" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.914956 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6tx4j\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.914987 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e268554-349b-41e2-a9ec-64b6de541ace-config\") pod \"console-operator-58897d9998-ztcsx\" (UID: \"5e268554-349b-41e2-a9ec-64b6de541ace\") " pod="openshift-console-operator/console-operator-58897d9998-ztcsx" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.915015 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbd52c61-04b7-424a-88c0-71653fd8d65e-serving-cert\") pod \"route-controller-manager-6576b87f9c-lsdcg\" (UID: \"bbd52c61-04b7-424a-88c0-71653fd8d65e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lsdcg" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.915040 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-user-template-error\") pod 
\"oauth-openshift-558db77b4-6tx4j\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.915340 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9525db23-79d5-4936-b9a3-7af09b979ad6-config\") pod \"authentication-operator-69f744f599-kgf8g\" (UID: \"9525db23-79d5-4936-b9a3-7af09b979ad6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kgf8g" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.915379 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c77d403a-df91-47fe-bc4d-486dd5f7142f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-gcsc8\" (UID: \"c77d403a-df91-47fe-bc4d-486dd5f7142f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gcsc8" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.915438 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27bc5a48-7598-4ca1-a0ce-e08d8a694a13-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-fh2w9\" (UID: \"27bc5a48-7598-4ca1-a0ce-e08d8a694a13\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh2w9" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.915449 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6tx4j\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.915471 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/32ceedc0-31dc-4816-bf1c-30b63b08e98a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-bprtg\" (UID: \"32ceedc0-31dc-4816-bf1c-30b63b08e98a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bprtg" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.915514 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6tx4j\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.915546 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvmmz\" (UniqueName: \"kubernetes.io/projected/a986dc8a-d877-4144-9ed4-6e30cff96787-kube-api-access-bvmmz\") pod \"catalog-operator-68c6474976-pmh58\" (UID: \"a986dc8a-d877-4144-9ed4-6e30cff96787\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pmh58" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.915565 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e01c8d0d-e590-4e8f-96f0-85b5d7f29275-node-pullsecrets\") pod \"apiserver-76f77b778f-rmbh4\" (UID: \"e01c8d0d-e590-4e8f-96f0-85b5d7f29275\") 
" pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.915594 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6tx4j\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.915621 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e01c8d0d-e590-4e8f-96f0-85b5d7f29275-audit\") pod \"apiserver-76f77b778f-rmbh4\" (UID: \"e01c8d0d-e590-4e8f-96f0-85b5d7f29275\") " pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.916409 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27bc5a48-7598-4ca1-a0ce-e08d8a694a13-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-fh2w9\" (UID: \"27bc5a48-7598-4ca1-a0ce-e08d8a694a13\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh2w9" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.916646 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8d7de865-36cb-4d66-b97f-4e73717ea4a9-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-hj4jh\" (UID: \"8d7de865-36cb-4d66-b97f-4e73717ea4a9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hj4jh" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.916645 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d39a70e8-5d53-431a-8413-bbb2041fc8dd-audit-policies\") pod \"oauth-openshift-558db77b4-6tx4j\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.916675 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e01c8d0d-e590-4e8f-96f0-85b5d7f29275-image-import-ca\") pod \"apiserver-76f77b778f-rmbh4\" (UID: \"e01c8d0d-e590-4e8f-96f0-85b5d7f29275\") " pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.916720 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e01c8d0d-e590-4e8f-96f0-85b5d7f29275-audit\") pod \"apiserver-76f77b778f-rmbh4\" (UID: \"e01c8d0d-e590-4e8f-96f0-85b5d7f29275\") " pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.914394 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e01c8d0d-e590-4e8f-96f0-85b5d7f29275-audit-dir\") pod \"apiserver-76f77b778f-rmbh4\" (UID: \"e01c8d0d-e590-4e8f-96f0-85b5d7f29275\") " pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.916786 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e01c8d0d-e590-4e8f-96f0-85b5d7f29275-config\") pod \"apiserver-76f77b778f-rmbh4\" (UID: 
\"e01c8d0d-e590-4e8f-96f0-85b5d7f29275\") " pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.916930 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/27bc5a48-7598-4ca1-a0ce-e08d8a694a13-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-fh2w9\" (UID: \"27bc5a48-7598-4ca1-a0ce-e08d8a694a13\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh2w9" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.918067 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bbd52c61-04b7-424a-88c0-71653fd8d65e-client-ca\") pod \"route-controller-manager-6576b87f9c-lsdcg\" (UID: \"bbd52c61-04b7-424a-88c0-71653fd8d65e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lsdcg" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.918246 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9525db23-79d5-4936-b9a3-7af09b979ad6-service-ca-bundle\") pod \"authentication-operator-69f744f599-kgf8g\" (UID: \"9525db23-79d5-4936-b9a3-7af09b979ad6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kgf8g" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.918353 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8f831a3-1114-43e1-b10b-d5b31905aa9e-config\") pod \"kube-controller-manager-operator-78b949d7b-qfgsc\" (UID: \"f8f831a3-1114-43e1-b10b-d5b31905aa9e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qfgsc" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.918392 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e01c8d0d-e590-4e8f-96f0-85b5d7f29275-node-pullsecrets\") pod \"apiserver-76f77b778f-rmbh4\" (UID: \"e01c8d0d-e590-4e8f-96f0-85b5d7f29275\") " pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.918530 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9525db23-79d5-4936-b9a3-7af09b979ad6-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-kgf8g\" (UID: \"9525db23-79d5-4936-b9a3-7af09b979ad6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kgf8g" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.918838 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/92175868-fbbe-4efc-824d-9af2cd73b853-auth-proxy-config\") pod \"machine-approver-56656f9798-s22mc\" (UID: \"92175868-fbbe-4efc-824d-9af2cd73b853\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s22mc" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.919143 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/21dab2f8-0cdd-4cd9-b3ed-aa0d4bce10e8-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jfb6p\" (UID: \"21dab2f8-0cdd-4cd9-b3ed-aa0d4bce10e8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jfb6p" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.919783 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92175868-fbbe-4efc-824d-9af2cd73b853-config\") pod \"machine-approver-56656f9798-s22mc\" (UID: \"92175868-fbbe-4efc-824d-9af2cd73b853\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s22mc" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.920073 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbd52c61-04b7-424a-88c0-71653fd8d65e-config\") pod \"route-controller-manager-6576b87f9c-lsdcg\" (UID: \"bbd52c61-04b7-424a-88c0-71653fd8d65e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lsdcg" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.920923 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/27bc5a48-7598-4ca1-a0ce-e08d8a694a13-audit-policies\") pod \"apiserver-7bbb656c7d-fh2w9\" (UID: \"27bc5a48-7598-4ca1-a0ce-e08d8a694a13\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh2w9" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.921294 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e268554-349b-41e2-a9ec-64b6de541ace-config\") pod \"console-operator-58897d9998-ztcsx\" (UID: \"5e268554-349b-41e2-a9ec-64b6de541ace\") " pod="openshift-console-operator/console-operator-58897d9998-ztcsx" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.921312 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6tx4j\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.921377 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6tx4j\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.921737 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5e268554-349b-41e2-a9ec-64b6de541ace-trusted-ca\") pod \"console-operator-58897d9998-ztcsx\" (UID: \"5e268554-349b-41e2-a9ec-64b6de541ace\") " pod="openshift-console-operator/console-operator-58897d9998-ztcsx" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.922036 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e01c8d0d-e590-4e8f-96f0-85b5d7f29275-etcd-serving-ca\") pod \"apiserver-76f77b778f-rmbh4\" (UID: \"e01c8d0d-e590-4e8f-96f0-85b5d7f29275\") " pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.922404 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e01c8d0d-e590-4e8f-96f0-85b5d7f29275-trusted-ca-bundle\") pod \"apiserver-76f77b778f-rmbh4\" (UID: \"e01c8d0d-e590-4e8f-96f0-85b5d7f29275\") " 
pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.923364 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6tx4j\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.923975 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27bc5a48-7598-4ca1-a0ce-e08d8a694a13-serving-cert\") pod \"apiserver-7bbb656c7d-fh2w9\" (UID: \"27bc5a48-7598-4ca1-a0ce-e08d8a694a13\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh2w9" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.924489 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.924539 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6tx4j\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.924586 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/27bc5a48-7598-4ca1-a0ce-e08d8a694a13-etcd-client\") pod \"apiserver-7bbb656c7d-fh2w9\" (UID: \"27bc5a48-7598-4ca1-a0ce-e08d8a694a13\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh2w9" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.924651 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e01c8d0d-e590-4e8f-96f0-85b5d7f29275-encryption-config\") pod \"apiserver-76f77b778f-rmbh4\" (UID: \"e01c8d0d-e590-4e8f-96f0-85b5d7f29275\") " pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.924682 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e268554-349b-41e2-a9ec-64b6de541ace-serving-cert\") pod \"console-operator-58897d9998-ztcsx\" (UID: \"5e268554-349b-41e2-a9ec-64b6de541ace\") " pod="openshift-console-operator/console-operator-58897d9998-ztcsx" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.924742 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6tx4j\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.925079 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9525db23-79d5-4936-b9a3-7af09b979ad6-serving-cert\") pod \"authentication-operator-69f744f599-kgf8g\" (UID: \"9525db23-79d5-4936-b9a3-7af09b979ad6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kgf8g" Dec 02 20:14:21 
crc kubenswrapper[4796]: I1202 20:14:21.925192 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6tx4j\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.925292 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8f831a3-1114-43e1-b10b-d5b31905aa9e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qfgsc\" (UID: \"f8f831a3-1114-43e1-b10b-d5b31905aa9e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qfgsc" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.925485 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21dab2f8-0cdd-4cd9-b3ed-aa0d4bce10e8-serving-cert\") pod \"openshift-config-operator-7777fb866f-jfb6p\" (UID: \"21dab2f8-0cdd-4cd9-b3ed-aa0d4bce10e8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jfb6p" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.925815 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6tx4j\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.925984 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38aa3029-8a07-415a-9da2-9a0dca76fdb0-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4nq9r\" (UID: \"38aa3029-8a07-415a-9da2-9a0dca76fdb0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4nq9r" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.926299 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e01c8d0d-e590-4e8f-96f0-85b5d7f29275-serving-cert\") pod \"apiserver-76f77b778f-rmbh4\" (UID: \"e01c8d0d-e590-4e8f-96f0-85b5d7f29275\") " pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.926329 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6tx4j\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.926471 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6tx4j\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.926620 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/27bc5a48-7598-4ca1-a0ce-e08d8a694a13-encryption-config\") pod \"apiserver-7bbb656c7d-fh2w9\" (UID: \"27bc5a48-7598-4ca1-a0ce-e08d8a694a13\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh2w9" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.926728 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e01c8d0d-e590-4e8f-96f0-85b5d7f29275-etcd-client\") pod \"apiserver-76f77b778f-rmbh4\" (UID: \"e01c8d0d-e590-4e8f-96f0-85b5d7f29275\") " pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.927318 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbd52c61-04b7-424a-88c0-71653fd8d65e-serving-cert\") pod \"route-controller-manager-6576b87f9c-lsdcg\" (UID: \"bbd52c61-04b7-424a-88c0-71653fd8d65e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lsdcg" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.927360 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/92175868-fbbe-4efc-824d-9af2cd73b853-machine-approver-tls\") pod \"machine-approver-56656f9798-s22mc\" (UID: \"92175868-fbbe-4efc-824d-9af2cd73b853\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s22mc" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.932814 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38aa3029-8a07-415a-9da2-9a0dca76fdb0-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4nq9r\" (UID: \"38aa3029-8a07-415a-9da2-9a0dca76fdb0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4nq9r" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.941564 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6tx4j\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.953302 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.962080 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/32ceedc0-31dc-4816-bf1c-30b63b08e98a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-bprtg\" (UID: \"32ceedc0-31dc-4816-bf1c-30b63b08e98a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bprtg" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.962138 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.982599 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 02 20:14:21 crc kubenswrapper[4796]: I1202 20:14:21.987736 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/32ceedc0-31dc-4816-bf1c-30b63b08e98a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-bprtg\" (UID: \"32ceedc0-31dc-4816-bf1c-30b63b08e98a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bprtg" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.002991 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.023304 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.043572 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.055640 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a986dc8a-d877-4144-9ed4-6e30cff96787-srv-cert\") pod \"catalog-operator-68c6474976-pmh58\" (UID: \"a986dc8a-d877-4144-9ed4-6e30cff96787\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pmh58" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.063617 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.076794 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a986dc8a-d877-4144-9ed4-6e30cff96787-profile-collector-cert\") pod \"catalog-operator-68c6474976-pmh58\" (UID: \"a986dc8a-d877-4144-9ed4-6e30cff96787\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pmh58" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.083218 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.103024 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.123647 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.143592 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.163362 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.173167 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9775f308-f5d8-4bc7-bfc0-00f065833a55-metrics-certs\") pod \"router-default-5444994796-hn5qp\" (UID: \"9775f308-f5d8-4bc7-bfc0-00f065833a55\") " pod="openshift-ingress/router-default-5444994796-hn5qp" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.182933 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.195817 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" 
(UniqueName: \"kubernetes.io/secret/9775f308-f5d8-4bc7-bfc0-00f065833a55-default-certificate\") pod \"router-default-5444994796-hn5qp\" (UID: \"9775f308-f5d8-4bc7-bfc0-00f065833a55\") " pod="openshift-ingress/router-default-5444994796-hn5qp" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.201716 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.204469 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9775f308-f5d8-4bc7-bfc0-00f065833a55-service-ca-bundle\") pod \"router-default-5444994796-hn5qp\" (UID: \"9775f308-f5d8-4bc7-bfc0-00f065833a55\") " pod="openshift-ingress/router-default-5444994796-hn5qp" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.221700 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.241581 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.253372 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9775f308-f5d8-4bc7-bfc0-00f065833a55-stats-auth\") pod \"router-default-5444994796-hn5qp\" (UID: \"9775f308-f5d8-4bc7-bfc0-00f065833a55\") " pod="openshift-ingress/router-default-5444994796-hn5qp" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.263395 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.283918 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.302363 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.339610 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.343960 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.348103 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8d7de865-36cb-4d66-b97f-4e73717ea4a9-proxy-tls\") pod \"machine-config-controller-84d6567774-hj4jh\" (UID: \"8d7de865-36cb-4d66-b97f-4e73717ea4a9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hj4jh" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.354960 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c77d403a-df91-47fe-bc4d-486dd5f7142f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-gcsc8\" (UID: \"c77d403a-df91-47fe-bc4d-486dd5f7142f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gcsc8" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.363173 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 02 
20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.383469 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.403114 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.422484 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.443112 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.463013 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.471326 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c77d403a-df91-47fe-bc4d-486dd5f7142f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-gcsc8\" (UID: \"c77d403a-df91-47fe-bc4d-486dd5f7142f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gcsc8" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.482650 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.523663 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.544454 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.559292 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f87b13cb-58d7-402e-b4cc-0206862019dd-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-kmzq8\" (UID: \"f87b13cb-58d7-402e-b4cc-0206862019dd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kmzq8" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.563004 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.567465 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f87b13cb-58d7-402e-b4cc-0206862019dd-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-kmzq8\" (UID: \"f87b13cb-58d7-402e-b4cc-0206862019dd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kmzq8" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.583285 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.603055 4796 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.642793 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.662775 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.682950 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.702679 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.723428 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.742780 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.763180 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.780355 4796 request.go:700] Waited for 1.008182746s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmco-proxy-tls&limit=500&resourceVersion=0 Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.783161 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.803409 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.823002 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.842546 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.862096 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.882810 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.903444 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.921911 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.942088 4796 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.962584 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 02 20:14:22 crc kubenswrapper[4796]: I1202 20:14:22.990501 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.003058 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.046721 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9m9r\" (UniqueName: \"kubernetes.io/projected/2deb1633-420a-4e19-bf12-0c853eb1da21-kube-api-access-n9m9r\") pod \"cluster-samples-operator-665b6dd947-ctm9n\" (UID: \"2deb1633-420a-4e19-bf12-0c853eb1da21\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ctm9n" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.063954 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jndnc\" (UniqueName: \"kubernetes.io/projected/a3c8abe1-552e-404c-be1f-88f30e467d8f-kube-api-access-jndnc\") pod \"controller-manager-879f6c89f-zjrkr\" (UID: \"a3c8abe1-552e-404c-be1f-88f30e467d8f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zjrkr" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.082698 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.090184 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlpk7\" (UniqueName: \"kubernetes.io/projected/3ceb7b1c-b699-46ed-9571-0dd1fe3dce69-kube-api-access-dlpk7\") pod \"dns-operator-744455d44c-mvqcs\" (UID: \"3ceb7b1c-b699-46ed-9571-0dd1fe3dce69\") " pod="openshift-dns-operator/dns-operator-744455d44c-mvqcs" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.102891 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.122953 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.143652 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.161990 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.183944 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.192137 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-zjrkr" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.202869 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.208912 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-mvqcs" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.222585 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.235522 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.235750 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:14:23 crc kubenswrapper[4796]: E1202 20:14:23.235861 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:16:25.235806298 +0000 UTC m=+268.239181962 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.236101 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.236594 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.240701 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.244446 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.254931 4796 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ctm9n" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.262917 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.283476 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.303225 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.324881 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.338528 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.338902 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.343032 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.343509 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.345816 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.363139 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.385240 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.402758 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.423341 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.442724 
4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.462912 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.482855 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.491348 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.502674 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.507963 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.515758 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ctm9n"] Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.525093 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.541778 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.544831 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.563046 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.582912 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.603466 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.622785 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.643091 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.662134 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.670718 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zjrkr"] Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.674934 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mvqcs"] Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.682097 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 02 20:14:23 crc 
kubenswrapper[4796]: I1202 20:14:23.702374 4796 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.723278 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.741900 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.778584 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38aa3029-8a07-415a-9da2-9a0dca76fdb0-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4nq9r\" (UID: \"38aa3029-8a07-415a-9da2-9a0dca76fdb0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4nq9r" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.781176 4796 request.go:700] Waited for 1.867613942s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/serviceaccounts/kube-storage-version-migrator-operator/token Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.788547 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4nq9r" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.799965 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qpxk\" (UniqueName: \"kubernetes.io/projected/f87b13cb-58d7-402e-b4cc-0206862019dd-kube-api-access-4qpxk\") pod \"kube-storage-version-migrator-operator-b67b599dd-kmzq8\" (UID: \"f87b13cb-58d7-402e-b4cc-0206862019dd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kmzq8" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.820308 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9mhn\" (UniqueName: \"kubernetes.io/projected/d39a70e8-5d53-431a-8413-bbb2041fc8dd-kube-api-access-v9mhn\") pod \"oauth-openshift-558db77b4-6tx4j\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.836671 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f8f831a3-1114-43e1-b10b-d5b31905aa9e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qfgsc\" (UID: \"f8f831a3-1114-43e1-b10b-d5b31905aa9e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qfgsc" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.854240 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kmzq8" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.865736 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7fjz\" (UniqueName: \"kubernetes.io/projected/32ceedc0-31dc-4816-bf1c-30b63b08e98a-kube-api-access-k7fjz\") pod \"cluster-image-registry-operator-dc59b4c8b-bprtg\" (UID: \"32ceedc0-31dc-4816-bf1c-30b63b08e98a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bprtg" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.877577 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnw2v\" (UniqueName: \"kubernetes.io/projected/21dab2f8-0cdd-4cd9-b3ed-aa0d4bce10e8-kube-api-access-qnw2v\") pod \"openshift-config-operator-7777fb866f-jfb6p\" (UID: \"21dab2f8-0cdd-4cd9-b3ed-aa0d4bce10e8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jfb6p" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.904319 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mz7v\" (UniqueName: \"kubernetes.io/projected/9525db23-79d5-4936-b9a3-7af09b979ad6-kube-api-access-4mz7v\") pod \"authentication-operator-69f744f599-kgf8g\" (UID: \"9525db23-79d5-4936-b9a3-7af09b979ad6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kgf8g" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.937323 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-kgf8g" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.938032 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5wbr\" (UniqueName: \"kubernetes.io/projected/bbd52c61-04b7-424a-88c0-71653fd8d65e-kube-api-access-x5wbr\") pod \"route-controller-manager-6576b87f9c-lsdcg\" (UID: \"bbd52c61-04b7-424a-88c0-71653fd8d65e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lsdcg" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.958079 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/32ceedc0-31dc-4816-bf1c-30b63b08e98a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-bprtg\" (UID: \"32ceedc0-31dc-4816-bf1c-30b63b08e98a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bprtg" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.981955 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lsdcg" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.987888 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l4mj\" (UniqueName: \"kubernetes.io/projected/e01c8d0d-e590-4e8f-96f0-85b5d7f29275-kube-api-access-6l4mj\") pod \"apiserver-76f77b778f-rmbh4\" (UID: \"e01c8d0d-e590-4e8f-96f0-85b5d7f29275\") " pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" Dec 02 20:14:23 crc kubenswrapper[4796]: I1202 20:14:23.998471 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvmmz\" (UniqueName: \"kubernetes.io/projected/a986dc8a-d877-4144-9ed4-6e30cff96787-kube-api-access-bvmmz\") pod \"catalog-operator-68c6474976-pmh58\" (UID: \"a986dc8a-d877-4144-9ed4-6e30cff96787\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pmh58" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.001995 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.027927 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mvqcs" event={"ID":"3ceb7b1c-b699-46ed-9571-0dd1fe3dce69","Type":"ContainerStarted","Data":"931de78a53640f5a88176bd03fba498e7653073090a65473ae8aa5e82643bbdc"} Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.028492 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgmlz\" (UniqueName: \"kubernetes.io/projected/8d7de865-36cb-4d66-b97f-4e73717ea4a9-kube-api-access-bgmlz\") pod \"machine-config-controller-84d6567774-hj4jh\" (UID: \"8d7de865-36cb-4d66-b97f-4e73717ea4a9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hj4jh" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.029337 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-zjrkr" event={"ID":"a3c8abe1-552e-404c-be1f-88f30e467d8f","Type":"ContainerStarted","Data":"84d4767c47f2a8dd2d1d3144151f28e31aebd8745f6f06ce5185aa96652ff5df"} Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.037697 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbvcl\" (UniqueName: \"kubernetes.io/projected/9775f308-f5d8-4bc7-bfc0-00f065833a55-kube-api-access-lbvcl\") pod \"router-default-5444994796-hn5qp\" (UID: \"9775f308-f5d8-4bc7-bfc0-00f065833a55\") " pod="openshift-ingress/router-default-5444994796-hn5qp" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.058149 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjzl5\" (UniqueName: \"kubernetes.io/projected/c77d403a-df91-47fe-bc4d-486dd5f7142f-kube-api-access-wjzl5\") pod \"openshift-controller-manager-operator-756b6f6bc6-gcsc8\" (UID: \"c77d403a-df91-47fe-bc4d-486dd5f7142f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gcsc8" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.059054 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.065641 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qfgsc" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.077719 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brrnz\" (UniqueName: \"kubernetes.io/projected/5e268554-349b-41e2-a9ec-64b6de541ace-kube-api-access-brrnz\") pod \"console-operator-58897d9998-ztcsx\" (UID: \"5e268554-349b-41e2-a9ec-64b6de541ace\") " pod="openshift-console-operator/console-operator-58897d9998-ztcsx" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.096293 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bprtg" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.101426 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr45n\" (UniqueName: \"kubernetes.io/projected/27bc5a48-7598-4ca1-a0ce-e08d8a694a13-kube-api-access-dr45n\") pod \"apiserver-7bbb656c7d-fh2w9\" (UID: \"27bc5a48-7598-4ca1-a0ce-e08d8a694a13\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh2w9" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.103856 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pmh58" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.114618 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-hn5qp" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.193523 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gcsc8" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.193767 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jfb6p" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.193836 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hj4jh" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.194370 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8e53767d-5052-4220-9645-b8d6d433a7df-console-oauth-config\") pod \"console-f9d7485db-js54s\" (UID: \"8e53767d-5052-4220-9645-b8d6d433a7df\") " pod="openshift-console/console-f9d7485db-js54s" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.194701 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f24621ef-6ee6-4d48-bcdb-ca5f8bb7731a-config\") pod \"machine-api-operator-5694c8668f-p6qqx\" (UID: \"f24621ef-6ee6-4d48-bcdb-ca5f8bb7731a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p6qqx" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.194740 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f24621ef-6ee6-4d48-bcdb-ca5f8bb7731a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-p6qqx\" (UID: \"f24621ef-6ee6-4d48-bcdb-ca5f8bb7731a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p6qqx" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.194768 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e8fe08c7-8fcf-44e2-ba0e-08630f94c06d-registry-certificates\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.194792 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-825qh\" (UniqueName: \"kubernetes.io/projected/e8fe08c7-8fcf-44e2-ba0e-08630f94c06d-kube-api-access-825qh\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.194914 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e8fe08c7-8fcf-44e2-ba0e-08630f94c06d-bound-sa-token\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.194938 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8e53767d-5052-4220-9645-b8d6d433a7df-console-config\") pod \"console-f9d7485db-js54s\" (UID: \"8e53767d-5052-4220-9645-b8d6d433a7df\") " pod="openshift-console/console-f9d7485db-js54s" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.195738 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e8fe08c7-8fcf-44e2-ba0e-08630f94c06d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.195800 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.195832 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e8fe08c7-8fcf-44e2-ba0e-08630f94c06d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.196066 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e8fe08c7-8fcf-44e2-ba0e-08630f94c06d-trusted-ca\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.196100 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e8fe08c7-8fcf-44e2-ba0e-08630f94c06d-registry-tls\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.196131 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f24621ef-6ee6-4d48-bcdb-ca5f8bb7731a-images\") pod \"machine-api-operator-5694c8668f-p6qqx\" (UID: \"f24621ef-6ee6-4d48-bcdb-ca5f8bb7731a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p6qqx" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.196241 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8e53767d-5052-4220-9645-b8d6d433a7df-console-serving-cert\") pod \"console-f9d7485db-js54s\" (UID: \"8e53767d-5052-4220-9645-b8d6d433a7df\") " pod="openshift-console/console-f9d7485db-js54s" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.196413 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjkfk\" (UniqueName: \"kubernetes.io/projected/f24621ef-6ee6-4d48-bcdb-ca5f8bb7731a-kube-api-access-gjkfk\") pod \"machine-api-operator-5694c8668f-p6qqx\" (UID: \"f24621ef-6ee6-4d48-bcdb-ca5f8bb7731a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p6qqx" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.196548 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8e53767d-5052-4220-9645-b8d6d433a7df-service-ca\") pod \"console-f9d7485db-js54s\" (UID: \"8e53767d-5052-4220-9645-b8d6d433a7df\") " pod="openshift-console/console-f9d7485db-js54s" Dec 02 20:14:24 crc kubenswrapper[4796]: E1202 
20:14:24.197899 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:24.697322413 +0000 UTC m=+147.700697977 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.198575 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h99sr\" (UniqueName: \"kubernetes.io/projected/92175868-fbbe-4efc-824d-9af2cd73b853-kube-api-access-h99sr\") pod \"machine-approver-56656f9798-s22mc\" (UID: \"92175868-fbbe-4efc-824d-9af2cd73b853\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s22mc" Dec 02 20:14:24 crc kubenswrapper[4796]: W1202 20:14:24.200854 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-4dbb3b0d4d84497dc4df67bb9ad87cddd34732865ce7df1815e1896644e837e9 WatchSource:0}: Error finding container 4dbb3b0d4d84497dc4df67bb9ad87cddd34732865ce7df1815e1896644e837e9: Status 404 returned error can't find the container with id 4dbb3b0d4d84497dc4df67bb9ad87cddd34732865ce7df1815e1896644e837e9 Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.208798 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh2w9" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.293607 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-ztcsx" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.297864 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.298460 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e8fe08c7-8fcf-44e2-ba0e-08630f94c06d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.298547 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8e53767d-5052-4220-9645-b8d6d433a7df-console-serving-cert\") pod \"console-f9d7485db-js54s\" (UID: \"8e53767d-5052-4220-9645-b8d6d433a7df\") " pod="openshift-console/console-f9d7485db-js54s" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.298572 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjkfk\" (UniqueName: \"kubernetes.io/projected/f24621ef-6ee6-4d48-bcdb-ca5f8bb7731a-kube-api-access-gjkfk\") pod \"machine-api-operator-5694c8668f-p6qqx\" (UID: \"f24621ef-6ee6-4d48-bcdb-ca5f8bb7731a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p6qqx" Dec 02 20:14:24 crc kubenswrapper[4796]: E1202 20:14:24.298722 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:24.798649907 +0000 UTC m=+147.802025541 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.299288 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/03e312db-cfac-4e22-811b-a00d1b0a2902-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8vbxj\" (UID: \"03e312db-cfac-4e22-811b-a00d1b0a2902\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8vbxj" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.299342 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/03e312db-cfac-4e22-811b-a00d1b0a2902-metrics-tls\") pod \"ingress-operator-5b745b69d9-8vbxj\" (UID: \"03e312db-cfac-4e22-811b-a00d1b0a2902\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8vbxj" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.299382 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8e53767d-5052-4220-9645-b8d6d433a7df-service-ca\") pod \"console-f9d7485db-js54s\" (UID: \"8e53767d-5052-4220-9645-b8d6d433a7df\") " pod="openshift-console/console-f9d7485db-js54s" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.299443 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7txm\" (UniqueName: \"kubernetes.io/projected/03e312db-cfac-4e22-811b-a00d1b0a2902-kube-api-access-f7txm\") pod \"ingress-operator-5b745b69d9-8vbxj\" (UID: \"03e312db-cfac-4e22-811b-a00d1b0a2902\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8vbxj" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.299502 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8e53767d-5052-4220-9645-b8d6d433a7df-console-oauth-config\") pod \"console-f9d7485db-js54s\" (UID: \"8e53767d-5052-4220-9645-b8d6d433a7df\") " pod="openshift-console/console-f9d7485db-js54s" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.299520 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8e53767d-5052-4220-9645-b8d6d433a7df-oauth-serving-cert\") pod \"console-f9d7485db-js54s\" (UID: \"8e53767d-5052-4220-9645-b8d6d433a7df\") " pod="openshift-console/console-f9d7485db-js54s" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.299537 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/518ae85f-81af-4700-bb54-f6bdae637b1c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-r5ddg\" (UID: \"518ae85f-81af-4700-bb54-f6bdae637b1c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-r5ddg" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.299592 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f24621ef-6ee6-4d48-bcdb-ca5f8bb7731a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-p6qqx\" (UID: \"f24621ef-6ee6-4d48-bcdb-ca5f8bb7731a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p6qqx" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.299658 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e8fe08c7-8fcf-44e2-ba0e-08630f94c06d-registry-certificates\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.299679 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-825qh\" (UniqueName: \"kubernetes.io/projected/e8fe08c7-8fcf-44e2-ba0e-08630f94c06d-kube-api-access-825qh\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.299708 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e8fe08c7-8fcf-44e2-ba0e-08630f94c06d-bound-sa-token\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.299753 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8e53767d-5052-4220-9645-b8d6d433a7df-console-config\") pod \"console-f9d7485db-js54s\" (UID: \"8e53767d-5052-4220-9645-b8d6d433a7df\") " pod="openshift-console/console-f9d7485db-js54s" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.299772 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e8fe08c7-8fcf-44e2-ba0e-08630f94c06d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.299810 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxh5t\" (UniqueName: \"kubernetes.io/projected/8e53767d-5052-4220-9645-b8d6d433a7df-kube-api-access-fxh5t\") pod \"console-f9d7485db-js54s\" (UID: \"8e53767d-5052-4220-9645-b8d6d433a7df\") " pod="openshift-console/console-f9d7485db-js54s" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.299834 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e53767d-5052-4220-9645-b8d6d433a7df-trusted-ca-bundle\") pod \"console-f9d7485db-js54s\" (UID: \"8e53767d-5052-4220-9645-b8d6d433a7df\") " pod="openshift-console/console-f9d7485db-js54s" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.299860 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: 
\"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.299879 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/03e312db-cfac-4e22-811b-a00d1b0a2902-trusted-ca\") pod \"ingress-operator-5b745b69d9-8vbxj\" (UID: \"03e312db-cfac-4e22-811b-a00d1b0a2902\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8vbxj" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.299925 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e8fe08c7-8fcf-44e2-ba0e-08630f94c06d-trusted-ca\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.299945 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e8fe08c7-8fcf-44e2-ba0e-08630f94c06d-registry-tls\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.299963 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f24621ef-6ee6-4d48-bcdb-ca5f8bb7731a-images\") pod \"machine-api-operator-5694c8668f-p6qqx\" (UID: \"f24621ef-6ee6-4d48-bcdb-ca5f8bb7731a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p6qqx" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.299986 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed72b389-0d73-447b-946b-374b82b9d3eb-config\") pod \"openshift-apiserver-operator-796bbdcf4f-sxkvc\" (UID: \"ed72b389-0d73-447b-946b-374b82b9d3eb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sxkvc" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.300016 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ll85\" (UniqueName: \"kubernetes.io/projected/e716e73a-1881-44ca-8af0-fb9defd9645b-kube-api-access-8ll85\") pod \"downloads-7954f5f757-8jxpl\" (UID: \"e716e73a-1881-44ca-8af0-fb9defd9645b\") " pod="openshift-console/downloads-7954f5f757-8jxpl" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.300047 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f24621ef-6ee6-4d48-bcdb-ca5f8bb7731a-config\") pod \"machine-api-operator-5694c8668f-p6qqx\" (UID: \"f24621ef-6ee6-4d48-bcdb-ca5f8bb7731a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p6qqx" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.300067 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed72b389-0d73-447b-946b-374b82b9d3eb-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-sxkvc\" (UID: \"ed72b389-0d73-447b-946b-374b82b9d3eb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sxkvc" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.300085 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9dxh\" (UniqueName: \"kubernetes.io/projected/ed72b389-0d73-447b-946b-374b82b9d3eb-kube-api-access-t9dxh\") pod \"openshift-apiserver-operator-796bbdcf4f-sxkvc\" (UID: \"ed72b389-0d73-447b-946b-374b82b9d3eb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sxkvc" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.300876 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8e53767d-5052-4220-9645-b8d6d433a7df-service-ca\") pod \"console-f9d7485db-js54s\" (UID: \"8e53767d-5052-4220-9645-b8d6d433a7df\") " pod="openshift-console/console-f9d7485db-js54s" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.301547 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e8fe08c7-8fcf-44e2-ba0e-08630f94c06d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.301698 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f24621ef-6ee6-4d48-bcdb-ca5f8bb7731a-images\") pod \"machine-api-operator-5694c8668f-p6qqx\" (UID: \"f24621ef-6ee6-4d48-bcdb-ca5f8bb7731a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p6qqx" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.301922 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e8fe08c7-8fcf-44e2-ba0e-08630f94c06d-registry-certificates\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:24 crc kubenswrapper[4796]: E1202 20:14:24.301965 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:24.801948007 +0000 UTC m=+147.805323551 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.302109 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjw8w\" (UniqueName: \"kubernetes.io/projected/518ae85f-81af-4700-bb54-f6bdae637b1c-kube-api-access-vjw8w\") pod \"multus-admission-controller-857f4d67dd-r5ddg\" (UID: \"518ae85f-81af-4700-bb54-f6bdae637b1c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-r5ddg" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.303617 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f24621ef-6ee6-4d48-bcdb-ca5f8bb7731a-config\") pod \"machine-api-operator-5694c8668f-p6qqx\" (UID: \"f24621ef-6ee6-4d48-bcdb-ca5f8bb7731a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p6qqx" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.303768 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e8fe08c7-8fcf-44e2-ba0e-08630f94c06d-trusted-ca\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.307072 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e8fe08c7-8fcf-44e2-ba0e-08630f94c06d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.308310 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e8fe08c7-8fcf-44e2-ba0e-08630f94c06d-registry-tls\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.308486 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8e53767d-5052-4220-9645-b8d6d433a7df-console-serving-cert\") pod \"console-f9d7485db-js54s\" (UID: \"8e53767d-5052-4220-9645-b8d6d433a7df\") " pod="openshift-console/console-f9d7485db-js54s" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.309424 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8e53767d-5052-4220-9645-b8d6d433a7df-console-config\") pod \"console-f9d7485db-js54s\" (UID: \"8e53767d-5052-4220-9645-b8d6d433a7df\") " pod="openshift-console/console-f9d7485db-js54s" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.310457 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/f24621ef-6ee6-4d48-bcdb-ca5f8bb7731a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-p6qqx\" (UID: \"f24621ef-6ee6-4d48-bcdb-ca5f8bb7731a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p6qqx" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.320392 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s22mc" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.349744 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8e53767d-5052-4220-9645-b8d6d433a7df-console-oauth-config\") pod \"console-f9d7485db-js54s\" (UID: \"8e53767d-5052-4220-9645-b8d6d433a7df\") " pod="openshift-console/console-f9d7485db-js54s" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.360496 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjkfk\" (UniqueName: \"kubernetes.io/projected/f24621ef-6ee6-4d48-bcdb-ca5f8bb7731a-kube-api-access-gjkfk\") pod \"machine-api-operator-5694c8668f-p6qqx\" (UID: \"f24621ef-6ee6-4d48-bcdb-ca5f8bb7731a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p6qqx" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.363815 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-825qh\" (UniqueName: \"kubernetes.io/projected/e8fe08c7-8fcf-44e2-ba0e-08630f94c06d-kube-api-access-825qh\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.386176 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e8fe08c7-8fcf-44e2-ba0e-08630f94c06d-bound-sa-token\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.403555 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:24 crc kubenswrapper[4796]: E1202 20:14:24.403815 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:24.903736261 +0000 UTC m=+147.907111795 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.404809 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed72b389-0d73-447b-946b-374b82b9d3eb-config\") pod \"openshift-apiserver-operator-796bbdcf4f-sxkvc\" (UID: \"ed72b389-0d73-447b-946b-374b82b9d3eb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sxkvc" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.404882 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b273d88f-06a4-4c2d-9a6d-339954a14abd-proxy-tls\") pod \"machine-config-operator-74547568cd-bfwqk\" (UID: \"b273d88f-06a4-4c2d-9a6d-339954a14abd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bfwqk" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.404926 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ll85\" (UniqueName: \"kubernetes.io/projected/e716e73a-1881-44ca-8af0-fb9defd9645b-kube-api-access-8ll85\") pod \"downloads-7954f5f757-8jxpl\" (UID: \"e716e73a-1881-44ca-8af0-fb9defd9645b\") " pod="openshift-console/downloads-7954f5f757-8jxpl" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.404956 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x28m2\" (UniqueName: \"kubernetes.io/projected/3c7bea3f-e468-4655-b241-ae19d336c6c0-kube-api-access-x28m2\") pod \"control-plane-machine-set-operator-78cbb6b69f-8qqvs\" (UID: \"3c7bea3f-e468-4655-b241-ae19d336c6c0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8qqvs" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.404998 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/298ad988-5006-4eb8-8f3b-72bc50af51a6-registration-dir\") pod \"csi-hostpathplugin-tjstr\" (UID: \"298ad988-5006-4eb8-8f3b-72bc50af51a6\") " pod="hostpath-provisioner/csi-hostpathplugin-tjstr" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.405029 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trzmf\" (UniqueName: \"kubernetes.io/projected/1bd7655e-5c79-4963-bb2a-c8b55f131a39-kube-api-access-trzmf\") pod \"service-ca-operator-777779d784-54d99\" (UID: \"1bd7655e-5c79-4963-bb2a-c8b55f131a39\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-54d99" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.405059 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2hmv\" (UniqueName: \"kubernetes.io/projected/fd1a5c31-4bd7-4634-994e-d3681eaae556-kube-api-access-r2hmv\") pod \"ingress-canary-mfxpp\" (UID: \"fd1a5c31-4bd7-4634-994e-d3681eaae556\") " pod="openshift-ingress-canary/ingress-canary-mfxpp" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 
20:14:24.405085 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/216c9e89-3d03-460f-b45a-da07125e4109-srv-cert\") pod \"olm-operator-6b444d44fb-zhfv7\" (UID: \"216c9e89-3d03-460f-b45a-da07125e4109\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zhfv7" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.405104 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d6gl\" (UniqueName: \"kubernetes.io/projected/bfed3adf-1892-4da3-8ffc-f4033036a4ca-kube-api-access-2d6gl\") pod \"collect-profiles-29411760-zzp4z\" (UID: \"bfed3adf-1892-4da3-8ffc-f4033036a4ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411760-zzp4z" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.405157 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/74234e67-0f4f-4706-95a6-e04ac3ec94a5-signing-cabundle\") pod \"service-ca-9c57cc56f-cjf62\" (UID: \"74234e67-0f4f-4706-95a6-e04ac3ec94a5\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjf62" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.405203 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed72b389-0d73-447b-946b-374b82b9d3eb-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-sxkvc\" (UID: \"ed72b389-0d73-447b-946b-374b82b9d3eb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sxkvc" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.405224 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4br7\" (UniqueName: \"kubernetes.io/projected/99b7235a-d9b2-489c-b437-1aac26d04cc1-kube-api-access-x4br7\") pod \"packageserver-d55dfcdfc-rlp96\" (UID: \"99b7235a-d9b2-489c-b437-1aac26d04cc1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rlp96" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.405245 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9dxh\" (UniqueName: \"kubernetes.io/projected/ed72b389-0d73-447b-946b-374b82b9d3eb-kube-api-access-t9dxh\") pod \"openshift-apiserver-operator-796bbdcf4f-sxkvc\" (UID: \"ed72b389-0d73-447b-946b-374b82b9d3eb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sxkvc" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.405325 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjw8w\" (UniqueName: \"kubernetes.io/projected/518ae85f-81af-4700-bb54-f6bdae637b1c-kube-api-access-vjw8w\") pod \"multus-admission-controller-857f4d67dd-r5ddg\" (UID: \"518ae85f-81af-4700-bb54-f6bdae637b1c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-r5ddg" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.405357 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wt8x\" (UniqueName: \"kubernetes.io/projected/b43a5c93-cc27-4971-8cbc-453a421990c6-kube-api-access-2wt8x\") pod \"etcd-operator-b45778765-sfw6z\" (UID: \"b43a5c93-cc27-4971-8cbc-453a421990c6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sfw6z" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.405425 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b43a5c93-cc27-4971-8cbc-453a421990c6-etcd-client\") pod \"etcd-operator-b45778765-sfw6z\" (UID: \"b43a5c93-cc27-4971-8cbc-453a421990c6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sfw6z" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.405490 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b273d88f-06a4-4c2d-9a6d-339954a14abd-images\") pod \"machine-config-operator-74547568cd-bfwqk\" (UID: \"b273d88f-06a4-4c2d-9a6d-339954a14abd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bfwqk" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.405542 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/03e312db-cfac-4e22-811b-a00d1b0a2902-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8vbxj\" (UID: \"03e312db-cfac-4e22-811b-a00d1b0a2902\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8vbxj" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.405567 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b43a5c93-cc27-4971-8cbc-453a421990c6-etcd-ca\") pod \"etcd-operator-b45778765-sfw6z\" (UID: \"b43a5c93-cc27-4971-8cbc-453a421990c6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sfw6z" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.405600 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txzpl\" (UniqueName: \"kubernetes.io/projected/934c0aac-761d-43af-b40b-34e6d135b9fe-kube-api-access-txzpl\") pod \"machine-config-server-ptmbv\" (UID: \"934c0aac-761d-43af-b40b-34e6d135b9fe\") " pod="openshift-machine-config-operator/machine-config-server-ptmbv" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.405788 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2gdh\" (UniqueName: \"kubernetes.io/projected/216c9e89-3d03-460f-b45a-da07125e4109-kube-api-access-s2gdh\") pod \"olm-operator-6b444d44fb-zhfv7\" (UID: \"216c9e89-3d03-460f-b45a-da07125e4109\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zhfv7" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.405808 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/99b7235a-d9b2-489c-b437-1aac26d04cc1-apiservice-cert\") pod \"packageserver-d55dfcdfc-rlp96\" (UID: \"99b7235a-d9b2-489c-b437-1aac26d04cc1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rlp96" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.405827 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/03e312db-cfac-4e22-811b-a00d1b0a2902-metrics-tls\") pod \"ingress-operator-5b745b69d9-8vbxj\" (UID: \"03e312db-cfac-4e22-811b-a00d1b0a2902\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8vbxj" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.405846 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/934c0aac-761d-43af-b40b-34e6d135b9fe-node-bootstrap-token\") pod \"machine-config-server-ptmbv\" (UID: \"934c0aac-761d-43af-b40b-34e6d135b9fe\") " pod="openshift-machine-config-operator/machine-config-server-ptmbv" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.405866 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn55s\" (UniqueName: \"kubernetes.io/projected/8eadd752-dac7-4cc8-8e44-11f8b3f35c74-kube-api-access-zn55s\") pod \"dns-default-slnp6\" (UID: \"8eadd752-dac7-4cc8-8e44-11f8b3f35c74\") " pod="openshift-dns/dns-default-slnp6" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.405893 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bfed3adf-1892-4da3-8ffc-f4033036a4ca-secret-volume\") pod \"collect-profiles-29411760-zzp4z\" (UID: \"bfed3adf-1892-4da3-8ffc-f4033036a4ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411760-zzp4z" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.405937 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0a51e701-99f5-423c-a413-464a283751f4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8nh9j\" (UID: \"0a51e701-99f5-423c-a413-464a283751f4\") " pod="openshift-marketplace/marketplace-operator-79b997595-8nh9j" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.405958 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8849ec8b-1b79-4779-ae4c-56143ff6a938-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-fqd2f\" (UID: \"8849ec8b-1b79-4779-ae4c-56143ff6a938\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqd2f" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.405987 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/934c0aac-761d-43af-b40b-34e6d135b9fe-certs\") pod \"machine-config-server-ptmbv\" (UID: \"934c0aac-761d-43af-b40b-34e6d135b9fe\") " pod="openshift-machine-config-operator/machine-config-server-ptmbv" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.406004 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/298ad988-5006-4eb8-8f3b-72bc50af51a6-mountpoint-dir\") pod \"csi-hostpathplugin-tjstr\" (UID: \"298ad988-5006-4eb8-8f3b-72bc50af51a6\") " pod="hostpath-provisioner/csi-hostpathplugin-tjstr" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.406020 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/99b7235a-d9b2-489c-b437-1aac26d04cc1-tmpfs\") pod \"packageserver-d55dfcdfc-rlp96\" (UID: \"99b7235a-d9b2-489c-b437-1aac26d04cc1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rlp96" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.406039 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7txm\" (UniqueName: \"kubernetes.io/projected/03e312db-cfac-4e22-811b-a00d1b0a2902-kube-api-access-f7txm\") pod 
\"ingress-operator-5b745b69d9-8vbxj\" (UID: \"03e312db-cfac-4e22-811b-a00d1b0a2902\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8vbxj" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.406895 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8e53767d-5052-4220-9645-b8d6d433a7df-oauth-serving-cert\") pod \"console-f9d7485db-js54s\" (UID: \"8e53767d-5052-4220-9645-b8d6d433a7df\") " pod="openshift-console/console-f9d7485db-js54s" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.406922 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8eadd752-dac7-4cc8-8e44-11f8b3f35c74-metrics-tls\") pod \"dns-default-slnp6\" (UID: \"8eadd752-dac7-4cc8-8e44-11f8b3f35c74\") " pod="openshift-dns/dns-default-slnp6" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.407017 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b43a5c93-cc27-4971-8cbc-453a421990c6-config\") pod \"etcd-operator-b45778765-sfw6z\" (UID: \"b43a5c93-cc27-4971-8cbc-453a421990c6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sfw6z" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.407043 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bd7655e-5c79-4963-bb2a-c8b55f131a39-config\") pod \"service-ca-operator-777779d784-54d99\" (UID: \"1bd7655e-5c79-4963-bb2a-c8b55f131a39\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-54d99" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.407076 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b43a5c93-cc27-4971-8cbc-453a421990c6-serving-cert\") pod \"etcd-operator-b45778765-sfw6z\" (UID: \"b43a5c93-cc27-4971-8cbc-453a421990c6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sfw6z" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.407109 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/518ae85f-81af-4700-bb54-f6bdae637b1c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-r5ddg\" (UID: \"518ae85f-81af-4700-bb54-f6bdae637b1c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-r5ddg" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.407130 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mkff\" (UniqueName: \"kubernetes.io/projected/8849ec8b-1b79-4779-ae4c-56143ff6a938-kube-api-access-8mkff\") pod \"package-server-manager-789f6589d5-fqd2f\" (UID: \"8849ec8b-1b79-4779-ae4c-56143ff6a938\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqd2f" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.407223 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/161618bd-f935-4923-95bb-70d4cfd6eb1f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-c6b87\" (UID: \"161618bd-f935-4923-95bb-70d4cfd6eb1f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c6b87" Dec 02 20:14:24 
crc kubenswrapper[4796]: I1202 20:14:24.407724 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8eadd752-dac7-4cc8-8e44-11f8b3f35c74-config-volume\") pod \"dns-default-slnp6\" (UID: \"8eadd752-dac7-4cc8-8e44-11f8b3f35c74\") " pod="openshift-dns/dns-default-slnp6" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.407753 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bfed3adf-1892-4da3-8ffc-f4033036a4ca-config-volume\") pod \"collect-profiles-29411760-zzp4z\" (UID: \"bfed3adf-1892-4da3-8ffc-f4033036a4ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411760-zzp4z" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.407808 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/216c9e89-3d03-460f-b45a-da07125e4109-profile-collector-cert\") pod \"olm-operator-6b444d44fb-zhfv7\" (UID: \"216c9e89-3d03-460f-b45a-da07125e4109\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zhfv7" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.407835 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b43a5c93-cc27-4971-8cbc-453a421990c6-etcd-service-ca\") pod \"etcd-operator-b45778765-sfw6z\" (UID: \"b43a5c93-cc27-4971-8cbc-453a421990c6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sfw6z" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.407854 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/298ad988-5006-4eb8-8f3b-72bc50af51a6-plugins-dir\") pod \"csi-hostpathplugin-tjstr\" (UID: \"298ad988-5006-4eb8-8f3b-72bc50af51a6\") " pod="hostpath-provisioner/csi-hostpathplugin-tjstr" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.407940 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3c7bea3f-e468-4655-b241-ae19d336c6c0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-8qqvs\" (UID: \"3c7bea3f-e468-4655-b241-ae19d336c6c0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8qqvs" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.408231 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5rzg\" (UniqueName: \"kubernetes.io/projected/0a51e701-99f5-423c-a413-464a283751f4-kube-api-access-l5rzg\") pod \"marketplace-operator-79b997595-8nh9j\" (UID: \"0a51e701-99f5-423c-a413-464a283751f4\") " pod="openshift-marketplace/marketplace-operator-79b997595-8nh9j" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.408289 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fd1a5c31-4bd7-4634-994e-d3681eaae556-cert\") pod \"ingress-canary-mfxpp\" (UID: \"fd1a5c31-4bd7-4634-994e-d3681eaae556\") " pod="openshift-ingress-canary/ingress-canary-mfxpp" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.408323 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fxh5t\" (UniqueName: \"kubernetes.io/projected/8e53767d-5052-4220-9645-b8d6d433a7df-kube-api-access-fxh5t\") pod \"console-f9d7485db-js54s\" (UID: \"8e53767d-5052-4220-9645-b8d6d433a7df\") " pod="openshift-console/console-f9d7485db-js54s" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.408370 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e53767d-5052-4220-9645-b8d6d433a7df-trusted-ca-bundle\") pod \"console-f9d7485db-js54s\" (UID: \"8e53767d-5052-4220-9645-b8d6d433a7df\") " pod="openshift-console/console-f9d7485db-js54s" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.408494 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed72b389-0d73-447b-946b-374b82b9d3eb-config\") pod \"openshift-apiserver-operator-796bbdcf4f-sxkvc\" (UID: \"ed72b389-0d73-447b-946b-374b82b9d3eb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sxkvc" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.408812 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.408844 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/161618bd-f935-4923-95bb-70d4cfd6eb1f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-c6b87\" (UID: \"161618bd-f935-4923-95bb-70d4cfd6eb1f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c6b87" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.408866 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/298ad988-5006-4eb8-8f3b-72bc50af51a6-csi-data-dir\") pod \"csi-hostpathplugin-tjstr\" (UID: \"298ad988-5006-4eb8-8f3b-72bc50af51a6\") " pod="hostpath-provisioner/csi-hostpathplugin-tjstr" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.411290 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/03e312db-cfac-4e22-811b-a00d1b0a2902-trusted-ca\") pod \"ingress-operator-5b745b69d9-8vbxj\" (UID: \"03e312db-cfac-4e22-811b-a00d1b0a2902\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8vbxj" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.411332 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/74234e67-0f4f-4706-95a6-e04ac3ec94a5-signing-key\") pod \"service-ca-9c57cc56f-cjf62\" (UID: \"74234e67-0f4f-4706-95a6-e04ac3ec94a5\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjf62" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.411355 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bd7655e-5c79-4963-bb2a-c8b55f131a39-serving-cert\") pod \"service-ca-operator-777779d784-54d99\" 
(UID: \"1bd7655e-5c79-4963-bb2a-c8b55f131a39\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-54d99" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.411469 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/298ad988-5006-4eb8-8f3b-72bc50af51a6-socket-dir\") pod \"csi-hostpathplugin-tjstr\" (UID: \"298ad988-5006-4eb8-8f3b-72bc50af51a6\") " pod="hostpath-provisioner/csi-hostpathplugin-tjstr" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.411516 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67xjz\" (UniqueName: \"kubernetes.io/projected/4fc5c4c1-71e4-47c6-96f5-62e69c97cb63-kube-api-access-67xjz\") pod \"migrator-59844c95c7-fpclt\" (UID: \"4fc5c4c1-71e4-47c6-96f5-62e69c97cb63\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fpclt" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.411872 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8e53767d-5052-4220-9645-b8d6d433a7df-oauth-serving-cert\") pod \"console-f9d7485db-js54s\" (UID: \"8e53767d-5052-4220-9645-b8d6d433a7df\") " pod="openshift-console/console-f9d7485db-js54s" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.412171 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e53767d-5052-4220-9645-b8d6d433a7df-trusted-ca-bundle\") pod \"console-f9d7485db-js54s\" (UID: \"8e53767d-5052-4220-9645-b8d6d433a7df\") " pod="openshift-console/console-f9d7485db-js54s" Dec 02 20:14:24 crc kubenswrapper[4796]: E1202 20:14:24.412416 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:24.91239337 +0000 UTC m=+147.915768904 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.412528 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv7f8\" (UniqueName: \"kubernetes.io/projected/74234e67-0f4f-4706-95a6-e04ac3ec94a5-kube-api-access-nv7f8\") pod \"service-ca-9c57cc56f-cjf62\" (UID: \"74234e67-0f4f-4706-95a6-e04ac3ec94a5\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjf62" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.412644 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/03e312db-cfac-4e22-811b-a00d1b0a2902-trusted-ca\") pod \"ingress-operator-5b745b69d9-8vbxj\" (UID: \"03e312db-cfac-4e22-811b-a00d1b0a2902\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8vbxj" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.412730 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mlwv\" (UniqueName: \"kubernetes.io/projected/298ad988-5006-4eb8-8f3b-72bc50af51a6-kube-api-access-2mlwv\") pod \"csi-hostpathplugin-tjstr\" (UID: \"298ad988-5006-4eb8-8f3b-72bc50af51a6\") " pod="hostpath-provisioner/csi-hostpathplugin-tjstr" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.412802 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b273d88f-06a4-4c2d-9a6d-339954a14abd-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bfwqk\" (UID: \"b273d88f-06a4-4c2d-9a6d-339954a14abd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bfwqk" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.412899 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsjqz\" (UniqueName: \"kubernetes.io/projected/b273d88f-06a4-4c2d-9a6d-339954a14abd-kube-api-access-hsjqz\") pod \"machine-config-operator-74547568cd-bfwqk\" (UID: \"b273d88f-06a4-4c2d-9a6d-339954a14abd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bfwqk" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.413040 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0a51e701-99f5-423c-a413-464a283751f4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8nh9j\" (UID: \"0a51e701-99f5-423c-a413-464a283751f4\") " pod="openshift-marketplace/marketplace-operator-79b997595-8nh9j" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.413321 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/99b7235a-d9b2-489c-b437-1aac26d04cc1-webhook-cert\") pod \"packageserver-d55dfcdfc-rlp96\" (UID: \"99b7235a-d9b2-489c-b437-1aac26d04cc1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rlp96" Dec 02 20:14:24 crc 
kubenswrapper[4796]: I1202 20:14:24.413589 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/161618bd-f935-4923-95bb-70d4cfd6eb1f-config\") pod \"kube-apiserver-operator-766d6c64bb-c6b87\" (UID: \"161618bd-f935-4923-95bb-70d4cfd6eb1f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c6b87" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.414567 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/03e312db-cfac-4e22-811b-a00d1b0a2902-metrics-tls\") pod \"ingress-operator-5b745b69d9-8vbxj\" (UID: \"03e312db-cfac-4e22-811b-a00d1b0a2902\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8vbxj" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.414625 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed72b389-0d73-447b-946b-374b82b9d3eb-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-sxkvc\" (UID: \"ed72b389-0d73-447b-946b-374b82b9d3eb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sxkvc" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.416203 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/518ae85f-81af-4700-bb54-f6bdae637b1c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-r5ddg\" (UID: \"518ae85f-81af-4700-bb54-f6bdae637b1c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-r5ddg" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.457210 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kmzq8"] Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.465959 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ll85\" (UniqueName: \"kubernetes.io/projected/e716e73a-1881-44ca-8af0-fb9defd9645b-kube-api-access-8ll85\") pod \"downloads-7954f5f757-8jxpl\" (UID: \"e716e73a-1881-44ca-8af0-fb9defd9645b\") " pod="openshift-console/downloads-7954f5f757-8jxpl" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.484661 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/03e312db-cfac-4e22-811b-a00d1b0a2902-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8vbxj\" (UID: \"03e312db-cfac-4e22-811b-a00d1b0a2902\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8vbxj" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.511373 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9dxh\" (UniqueName: \"kubernetes.io/projected/ed72b389-0d73-447b-946b-374b82b9d3eb-kube-api-access-t9dxh\") pod \"openshift-apiserver-operator-796bbdcf4f-sxkvc\" (UID: \"ed72b389-0d73-447b-946b-374b82b9d3eb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sxkvc" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.515222 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 
20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.515727 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8849ec8b-1b79-4779-ae4c-56143ff6a938-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-fqd2f\" (UID: \"8849ec8b-1b79-4779-ae4c-56143ff6a938\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqd2f" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.515762 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/934c0aac-761d-43af-b40b-34e6d135b9fe-certs\") pod \"machine-config-server-ptmbv\" (UID: \"934c0aac-761d-43af-b40b-34e6d135b9fe\") " pod="openshift-machine-config-operator/machine-config-server-ptmbv" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.515790 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/298ad988-5006-4eb8-8f3b-72bc50af51a6-mountpoint-dir\") pod \"csi-hostpathplugin-tjstr\" (UID: \"298ad988-5006-4eb8-8f3b-72bc50af51a6\") " pod="hostpath-provisioner/csi-hostpathplugin-tjstr" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.515815 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/99b7235a-d9b2-489c-b437-1aac26d04cc1-tmpfs\") pod \"packageserver-d55dfcdfc-rlp96\" (UID: \"99b7235a-d9b2-489c-b437-1aac26d04cc1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rlp96" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.515854 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8eadd752-dac7-4cc8-8e44-11f8b3f35c74-metrics-tls\") pod \"dns-default-slnp6\" (UID: \"8eadd752-dac7-4cc8-8e44-11f8b3f35c74\") " pod="openshift-dns/dns-default-slnp6" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.515879 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b43a5c93-cc27-4971-8cbc-453a421990c6-config\") pod \"etcd-operator-b45778765-sfw6z\" (UID: \"b43a5c93-cc27-4971-8cbc-453a421990c6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sfw6z" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.515900 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bd7655e-5c79-4963-bb2a-c8b55f131a39-config\") pod \"service-ca-operator-777779d784-54d99\" (UID: \"1bd7655e-5c79-4963-bb2a-c8b55f131a39\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-54d99" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.515924 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b43a5c93-cc27-4971-8cbc-453a421990c6-serving-cert\") pod \"etcd-operator-b45778765-sfw6z\" (UID: \"b43a5c93-cc27-4971-8cbc-453a421990c6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sfw6z" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.515950 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mkff\" (UniqueName: \"kubernetes.io/projected/8849ec8b-1b79-4779-ae4c-56143ff6a938-kube-api-access-8mkff\") pod \"package-server-manager-789f6589d5-fqd2f\" (UID: 
\"8849ec8b-1b79-4779-ae4c-56143ff6a938\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqd2f" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.515976 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/161618bd-f935-4923-95bb-70d4cfd6eb1f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-c6b87\" (UID: \"161618bd-f935-4923-95bb-70d4cfd6eb1f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c6b87" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.515999 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8eadd752-dac7-4cc8-8e44-11f8b3f35c74-config-volume\") pod \"dns-default-slnp6\" (UID: \"8eadd752-dac7-4cc8-8e44-11f8b3f35c74\") " pod="openshift-dns/dns-default-slnp6" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.516020 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bfed3adf-1892-4da3-8ffc-f4033036a4ca-config-volume\") pod \"collect-profiles-29411760-zzp4z\" (UID: \"bfed3adf-1892-4da3-8ffc-f4033036a4ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411760-zzp4z" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.516046 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/216c9e89-3d03-460f-b45a-da07125e4109-profile-collector-cert\") pod \"olm-operator-6b444d44fb-zhfv7\" (UID: \"216c9e89-3d03-460f-b45a-da07125e4109\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zhfv7" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.516067 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b43a5c93-cc27-4971-8cbc-453a421990c6-etcd-service-ca\") pod \"etcd-operator-b45778765-sfw6z\" (UID: \"b43a5c93-cc27-4971-8cbc-453a421990c6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sfw6z" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.516092 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/298ad988-5006-4eb8-8f3b-72bc50af51a6-plugins-dir\") pod \"csi-hostpathplugin-tjstr\" (UID: \"298ad988-5006-4eb8-8f3b-72bc50af51a6\") " pod="hostpath-provisioner/csi-hostpathplugin-tjstr" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.516121 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3c7bea3f-e468-4655-b241-ae19d336c6c0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-8qqvs\" (UID: \"3c7bea3f-e468-4655-b241-ae19d336c6c0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8qqvs" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.516150 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5rzg\" (UniqueName: \"kubernetes.io/projected/0a51e701-99f5-423c-a413-464a283751f4-kube-api-access-l5rzg\") pod \"marketplace-operator-79b997595-8nh9j\" (UID: \"0a51e701-99f5-423c-a413-464a283751f4\") " pod="openshift-marketplace/marketplace-operator-79b997595-8nh9j" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 
20:14:24.516172 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fd1a5c31-4bd7-4634-994e-d3681eaae556-cert\") pod \"ingress-canary-mfxpp\" (UID: \"fd1a5c31-4bd7-4634-994e-d3681eaae556\") " pod="openshift-ingress-canary/ingress-canary-mfxpp" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.516232 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/161618bd-f935-4923-95bb-70d4cfd6eb1f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-c6b87\" (UID: \"161618bd-f935-4923-95bb-70d4cfd6eb1f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c6b87" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.516605 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/298ad988-5006-4eb8-8f3b-72bc50af51a6-csi-data-dir\") pod \"csi-hostpathplugin-tjstr\" (UID: \"298ad988-5006-4eb8-8f3b-72bc50af51a6\") " pod="hostpath-provisioner/csi-hostpathplugin-tjstr" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.516677 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/74234e67-0f4f-4706-95a6-e04ac3ec94a5-signing-key\") pod \"service-ca-9c57cc56f-cjf62\" (UID: \"74234e67-0f4f-4706-95a6-e04ac3ec94a5\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjf62" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.516702 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bd7655e-5c79-4963-bb2a-c8b55f131a39-serving-cert\") pod \"service-ca-operator-777779d784-54d99\" (UID: \"1bd7655e-5c79-4963-bb2a-c8b55f131a39\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-54d99" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.516728 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/298ad988-5006-4eb8-8f3b-72bc50af51a6-socket-dir\") pod \"csi-hostpathplugin-tjstr\" (UID: \"298ad988-5006-4eb8-8f3b-72bc50af51a6\") " pod="hostpath-provisioner/csi-hostpathplugin-tjstr" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.516754 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67xjz\" (UniqueName: \"kubernetes.io/projected/4fc5c4c1-71e4-47c6-96f5-62e69c97cb63-kube-api-access-67xjz\") pod \"migrator-59844c95c7-fpclt\" (UID: \"4fc5c4c1-71e4-47c6-96f5-62e69c97cb63\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fpclt" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.516781 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv7f8\" (UniqueName: \"kubernetes.io/projected/74234e67-0f4f-4706-95a6-e04ac3ec94a5-kube-api-access-nv7f8\") pod \"service-ca-9c57cc56f-cjf62\" (UID: \"74234e67-0f4f-4706-95a6-e04ac3ec94a5\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjf62" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.516832 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mlwv\" (UniqueName: \"kubernetes.io/projected/298ad988-5006-4eb8-8f3b-72bc50af51a6-kube-api-access-2mlwv\") pod \"csi-hostpathplugin-tjstr\" (UID: \"298ad988-5006-4eb8-8f3b-72bc50af51a6\") " 
pod="hostpath-provisioner/csi-hostpathplugin-tjstr" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.518164 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b273d88f-06a4-4c2d-9a6d-339954a14abd-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bfwqk\" (UID: \"b273d88f-06a4-4c2d-9a6d-339954a14abd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bfwqk" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.518313 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsjqz\" (UniqueName: \"kubernetes.io/projected/b273d88f-06a4-4c2d-9a6d-339954a14abd-kube-api-access-hsjqz\") pod \"machine-config-operator-74547568cd-bfwqk\" (UID: \"b273d88f-06a4-4c2d-9a6d-339954a14abd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bfwqk" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.518338 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0a51e701-99f5-423c-a413-464a283751f4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8nh9j\" (UID: \"0a51e701-99f5-423c-a413-464a283751f4\") " pod="openshift-marketplace/marketplace-operator-79b997595-8nh9j" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.518374 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/99b7235a-d9b2-489c-b437-1aac26d04cc1-webhook-cert\") pod \"packageserver-d55dfcdfc-rlp96\" (UID: \"99b7235a-d9b2-489c-b437-1aac26d04cc1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rlp96" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.518405 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/161618bd-f935-4923-95bb-70d4cfd6eb1f-config\") pod \"kube-apiserver-operator-766d6c64bb-c6b87\" (UID: \"161618bd-f935-4923-95bb-70d4cfd6eb1f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c6b87" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.518428 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b273d88f-06a4-4c2d-9a6d-339954a14abd-proxy-tls\") pod \"machine-config-operator-74547568cd-bfwqk\" (UID: \"b273d88f-06a4-4c2d-9a6d-339954a14abd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bfwqk" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.518459 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x28m2\" (UniqueName: \"kubernetes.io/projected/3c7bea3f-e468-4655-b241-ae19d336c6c0-kube-api-access-x28m2\") pod \"control-plane-machine-set-operator-78cbb6b69f-8qqvs\" (UID: \"3c7bea3f-e468-4655-b241-ae19d336c6c0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8qqvs" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.518485 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/298ad988-5006-4eb8-8f3b-72bc50af51a6-registration-dir\") pod \"csi-hostpathplugin-tjstr\" (UID: \"298ad988-5006-4eb8-8f3b-72bc50af51a6\") " pod="hostpath-provisioner/csi-hostpathplugin-tjstr" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 
20:14:24.518511 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trzmf\" (UniqueName: \"kubernetes.io/projected/1bd7655e-5c79-4963-bb2a-c8b55f131a39-kube-api-access-trzmf\") pod \"service-ca-operator-777779d784-54d99\" (UID: \"1bd7655e-5c79-4963-bb2a-c8b55f131a39\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-54d99" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.518539 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2hmv\" (UniqueName: \"kubernetes.io/projected/fd1a5c31-4bd7-4634-994e-d3681eaae556-kube-api-access-r2hmv\") pod \"ingress-canary-mfxpp\" (UID: \"fd1a5c31-4bd7-4634-994e-d3681eaae556\") " pod="openshift-ingress-canary/ingress-canary-mfxpp" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.518561 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/216c9e89-3d03-460f-b45a-da07125e4109-srv-cert\") pod \"olm-operator-6b444d44fb-zhfv7\" (UID: \"216c9e89-3d03-460f-b45a-da07125e4109\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zhfv7" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.518597 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d6gl\" (UniqueName: \"kubernetes.io/projected/bfed3adf-1892-4da3-8ffc-f4033036a4ca-kube-api-access-2d6gl\") pod \"collect-profiles-29411760-zzp4z\" (UID: \"bfed3adf-1892-4da3-8ffc-f4033036a4ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411760-zzp4z" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.518626 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/74234e67-0f4f-4706-95a6-e04ac3ec94a5-signing-cabundle\") pod \"service-ca-9c57cc56f-cjf62\" (UID: \"74234e67-0f4f-4706-95a6-e04ac3ec94a5\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjf62" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.518653 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4br7\" (UniqueName: \"kubernetes.io/projected/99b7235a-d9b2-489c-b437-1aac26d04cc1-kube-api-access-x4br7\") pod \"packageserver-d55dfcdfc-rlp96\" (UID: \"99b7235a-d9b2-489c-b437-1aac26d04cc1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rlp96" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.518695 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wt8x\" (UniqueName: \"kubernetes.io/projected/b43a5c93-cc27-4971-8cbc-453a421990c6-kube-api-access-2wt8x\") pod \"etcd-operator-b45778765-sfw6z\" (UID: \"b43a5c93-cc27-4971-8cbc-453a421990c6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sfw6z" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.518727 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b43a5c93-cc27-4971-8cbc-453a421990c6-etcd-client\") pod \"etcd-operator-b45778765-sfw6z\" (UID: \"b43a5c93-cc27-4971-8cbc-453a421990c6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sfw6z" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.518757 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b273d88f-06a4-4c2d-9a6d-339954a14abd-images\") pod 
\"machine-config-operator-74547568cd-bfwqk\" (UID: \"b273d88f-06a4-4c2d-9a6d-339954a14abd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bfwqk" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.518782 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b43a5c93-cc27-4971-8cbc-453a421990c6-etcd-ca\") pod \"etcd-operator-b45778765-sfw6z\" (UID: \"b43a5c93-cc27-4971-8cbc-453a421990c6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sfw6z" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.518807 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txzpl\" (UniqueName: \"kubernetes.io/projected/934c0aac-761d-43af-b40b-34e6d135b9fe-kube-api-access-txzpl\") pod \"machine-config-server-ptmbv\" (UID: \"934c0aac-761d-43af-b40b-34e6d135b9fe\") " pod="openshift-machine-config-operator/machine-config-server-ptmbv" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.518830 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2gdh\" (UniqueName: \"kubernetes.io/projected/216c9e89-3d03-460f-b45a-da07125e4109-kube-api-access-s2gdh\") pod \"olm-operator-6b444d44fb-zhfv7\" (UID: \"216c9e89-3d03-460f-b45a-da07125e4109\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zhfv7" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.518857 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/99b7235a-d9b2-489c-b437-1aac26d04cc1-apiservice-cert\") pod \"packageserver-d55dfcdfc-rlp96\" (UID: \"99b7235a-d9b2-489c-b437-1aac26d04cc1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rlp96" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.518882 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/934c0aac-761d-43af-b40b-34e6d135b9fe-node-bootstrap-token\") pod \"machine-config-server-ptmbv\" (UID: \"934c0aac-761d-43af-b40b-34e6d135b9fe\") " pod="openshift-machine-config-operator/machine-config-server-ptmbv" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.518905 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn55s\" (UniqueName: \"kubernetes.io/projected/8eadd752-dac7-4cc8-8e44-11f8b3f35c74-kube-api-access-zn55s\") pod \"dns-default-slnp6\" (UID: \"8eadd752-dac7-4cc8-8e44-11f8b3f35c74\") " pod="openshift-dns/dns-default-slnp6" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.518931 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bfed3adf-1892-4da3-8ffc-f4033036a4ca-secret-volume\") pod \"collect-profiles-29411760-zzp4z\" (UID: \"bfed3adf-1892-4da3-8ffc-f4033036a4ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411760-zzp4z" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.518967 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0a51e701-99f5-423c-a413-464a283751f4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8nh9j\" (UID: \"0a51e701-99f5-423c-a413-464a283751f4\") " pod="openshift-marketplace/marketplace-operator-79b997595-8nh9j" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 
20:14:24.527342 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bfed3adf-1892-4da3-8ffc-f4033036a4ca-config-volume\") pod \"collect-profiles-29411760-zzp4z\" (UID: \"bfed3adf-1892-4da3-8ffc-f4033036a4ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411760-zzp4z" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.528381 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0a51e701-99f5-423c-a413-464a283751f4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8nh9j\" (UID: \"0a51e701-99f5-423c-a413-464a283751f4\") " pod="openshift-marketplace/marketplace-operator-79b997595-8nh9j" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.532223 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8eadd752-dac7-4cc8-8e44-11f8b3f35c74-config-volume\") pod \"dns-default-slnp6\" (UID: \"8eadd752-dac7-4cc8-8e44-11f8b3f35c74\") " pod="openshift-dns/dns-default-slnp6" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.533396 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/298ad988-5006-4eb8-8f3b-72bc50af51a6-plugins-dir\") pod \"csi-hostpathplugin-tjstr\" (UID: \"298ad988-5006-4eb8-8f3b-72bc50af51a6\") " pod="hostpath-provisioner/csi-hostpathplugin-tjstr" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.533482 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/298ad988-5006-4eb8-8f3b-72bc50af51a6-csi-data-dir\") pod \"csi-hostpathplugin-tjstr\" (UID: \"298ad988-5006-4eb8-8f3b-72bc50af51a6\") " pod="hostpath-provisioner/csi-hostpathplugin-tjstr" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.535110 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b43a5c93-cc27-4971-8cbc-453a421990c6-etcd-service-ca\") pod \"etcd-operator-b45778765-sfw6z\" (UID: \"b43a5c93-cc27-4971-8cbc-453a421990c6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sfw6z" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.536364 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b273d88f-06a4-4c2d-9a6d-339954a14abd-images\") pod \"machine-config-operator-74547568cd-bfwqk\" (UID: \"b273d88f-06a4-4c2d-9a6d-339954a14abd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bfwqk" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.537219 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b43a5c93-cc27-4971-8cbc-453a421990c6-etcd-ca\") pod \"etcd-operator-b45778765-sfw6z\" (UID: \"b43a5c93-cc27-4971-8cbc-453a421990c6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sfw6z" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.537740 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/298ad988-5006-4eb8-8f3b-72bc50af51a6-registration-dir\") pod \"csi-hostpathplugin-tjstr\" (UID: \"298ad988-5006-4eb8-8f3b-72bc50af51a6\") " pod="hostpath-provisioner/csi-hostpathplugin-tjstr" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.537896 
4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/298ad988-5006-4eb8-8f3b-72bc50af51a6-socket-dir\") pod \"csi-hostpathplugin-tjstr\" (UID: \"298ad988-5006-4eb8-8f3b-72bc50af51a6\") " pod="hostpath-provisioner/csi-hostpathplugin-tjstr" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.539989 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/161618bd-f935-4923-95bb-70d4cfd6eb1f-config\") pod \"kube-apiserver-operator-766d6c64bb-c6b87\" (UID: \"161618bd-f935-4923-95bb-70d4cfd6eb1f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c6b87" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.540651 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7txm\" (UniqueName: \"kubernetes.io/projected/03e312db-cfac-4e22-811b-a00d1b0a2902-kube-api-access-f7txm\") pod \"ingress-operator-5b745b69d9-8vbxj\" (UID: \"03e312db-cfac-4e22-811b-a00d1b0a2902\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8vbxj" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.542739 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b273d88f-06a4-4c2d-9a6d-339954a14abd-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bfwqk\" (UID: \"b273d88f-06a4-4c2d-9a6d-339954a14abd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bfwqk" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.543115 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/99b7235a-d9b2-489c-b437-1aac26d04cc1-tmpfs\") pod \"packageserver-d55dfcdfc-rlp96\" (UID: \"99b7235a-d9b2-489c-b437-1aac26d04cc1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rlp96" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.549997 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3c7bea3f-e468-4655-b241-ae19d336c6c0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-8qqvs\" (UID: \"3c7bea3f-e468-4655-b241-ae19d336c6c0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8qqvs" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.552754 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b43a5c93-cc27-4971-8cbc-453a421990c6-config\") pod \"etcd-operator-b45778765-sfw6z\" (UID: \"b43a5c93-cc27-4971-8cbc-453a421990c6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sfw6z" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.553394 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bd7655e-5c79-4963-bb2a-c8b55f131a39-config\") pod \"service-ca-operator-777779d784-54d99\" (UID: \"1bd7655e-5c79-4963-bb2a-c8b55f131a39\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-54d99" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.553827 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/161618bd-f935-4923-95bb-70d4cfd6eb1f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-c6b87\" (UID: 
\"161618bd-f935-4923-95bb-70d4cfd6eb1f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c6b87" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.553953 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjw8w\" (UniqueName: \"kubernetes.io/projected/518ae85f-81af-4700-bb54-f6bdae637b1c-kube-api-access-vjw8w\") pod \"multus-admission-controller-857f4d67dd-r5ddg\" (UID: \"518ae85f-81af-4700-bb54-f6bdae637b1c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-r5ddg" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.556708 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/74234e67-0f4f-4706-95a6-e04ac3ec94a5-signing-cabundle\") pod \"service-ca-9c57cc56f-cjf62\" (UID: \"74234e67-0f4f-4706-95a6-e04ac3ec94a5\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjf62" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.556759 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/99b7235a-d9b2-489c-b437-1aac26d04cc1-apiservice-cert\") pod \"packageserver-d55dfcdfc-rlp96\" (UID: \"99b7235a-d9b2-489c-b437-1aac26d04cc1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rlp96" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.556791 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/74234e67-0f4f-4706-95a6-e04ac3ec94a5-signing-key\") pod \"service-ca-9c57cc56f-cjf62\" (UID: \"74234e67-0f4f-4706-95a6-e04ac3ec94a5\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjf62" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.556905 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/298ad988-5006-4eb8-8f3b-72bc50af51a6-mountpoint-dir\") pod \"csi-hostpathplugin-tjstr\" (UID: \"298ad988-5006-4eb8-8f3b-72bc50af51a6\") " pod="hostpath-provisioner/csi-hostpathplugin-tjstr" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.557038 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/934c0aac-761d-43af-b40b-34e6d135b9fe-certs\") pod \"machine-config-server-ptmbv\" (UID: \"934c0aac-761d-43af-b40b-34e6d135b9fe\") " pod="openshift-machine-config-operator/machine-config-server-ptmbv" Dec 02 20:14:24 crc kubenswrapper[4796]: E1202 20:14:24.557382 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:25.057358159 +0000 UTC m=+148.060733683 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.557920 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8eadd752-dac7-4cc8-8e44-11f8b3f35c74-metrics-tls\") pod \"dns-default-slnp6\" (UID: \"8eadd752-dac7-4cc8-8e44-11f8b3f35c74\") " pod="openshift-dns/dns-default-slnp6" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.557949 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-p6qqx" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.559700 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/99b7235a-d9b2-489c-b437-1aac26d04cc1-webhook-cert\") pod \"packageserver-d55dfcdfc-rlp96\" (UID: \"99b7235a-d9b2-489c-b437-1aac26d04cc1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rlp96" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.561753 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b43a5c93-cc27-4971-8cbc-453a421990c6-serving-cert\") pod \"etcd-operator-b45778765-sfw6z\" (UID: \"b43a5c93-cc27-4971-8cbc-453a421990c6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sfw6z" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.562276 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fd1a5c31-4bd7-4634-994e-d3681eaae556-cert\") pod \"ingress-canary-mfxpp\" (UID: \"fd1a5c31-4bd7-4634-994e-d3681eaae556\") " pod="openshift-ingress-canary/ingress-canary-mfxpp" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.568794 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bfed3adf-1892-4da3-8ffc-f4033036a4ca-secret-volume\") pod \"collect-profiles-29411760-zzp4z\" (UID: \"bfed3adf-1892-4da3-8ffc-f4033036a4ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411760-zzp4z" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.570193 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8849ec8b-1b79-4779-ae4c-56143ff6a938-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-fqd2f\" (UID: \"8849ec8b-1b79-4779-ae4c-56143ff6a938\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqd2f" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.571489 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/216c9e89-3d03-460f-b45a-da07125e4109-srv-cert\") pod \"olm-operator-6b444d44fb-zhfv7\" (UID: \"216c9e89-3d03-460f-b45a-da07125e4109\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zhfv7" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.578363 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sxkvc" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.578818 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b273d88f-06a4-4c2d-9a6d-339954a14abd-proxy-tls\") pod \"machine-config-operator-74547568cd-bfwqk\" (UID: \"b273d88f-06a4-4c2d-9a6d-339954a14abd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bfwqk" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.578919 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/216c9e89-3d03-460f-b45a-da07125e4109-profile-collector-cert\") pod \"olm-operator-6b444d44fb-zhfv7\" (UID: \"216c9e89-3d03-460f-b45a-da07125e4109\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zhfv7" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.579037 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxh5t\" (UniqueName: \"kubernetes.io/projected/8e53767d-5052-4220-9645-b8d6d433a7df-kube-api-access-fxh5t\") pod \"console-f9d7485db-js54s\" (UID: \"8e53767d-5052-4220-9645-b8d6d433a7df\") " pod="openshift-console/console-f9d7485db-js54s" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.582826 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b43a5c93-cc27-4971-8cbc-453a421990c6-etcd-client\") pod \"etcd-operator-b45778765-sfw6z\" (UID: \"b43a5c93-cc27-4971-8cbc-453a421990c6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sfw6z" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.583850 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0a51e701-99f5-423c-a413-464a283751f4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8nh9j\" (UID: \"0a51e701-99f5-423c-a413-464a283751f4\") " pod="openshift-marketplace/marketplace-operator-79b997595-8nh9j" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.584051 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bd7655e-5c79-4963-bb2a-c8b55f131a39-serving-cert\") pod \"service-ca-operator-777779d784-54d99\" (UID: \"1bd7655e-5c79-4963-bb2a-c8b55f131a39\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-54d99" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.597441 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/934c0aac-761d-43af-b40b-34e6d135b9fe-node-bootstrap-token\") pod \"machine-config-server-ptmbv\" (UID: \"934c0aac-761d-43af-b40b-34e6d135b9fe\") " pod="openshift-machine-config-operator/machine-config-server-ptmbv" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.606674 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67xjz\" (UniqueName: \"kubernetes.io/projected/4fc5c4c1-71e4-47c6-96f5-62e69c97cb63-kube-api-access-67xjz\") pod \"migrator-59844c95c7-fpclt\" (UID: \"4fc5c4c1-71e4-47c6-96f5-62e69c97cb63\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fpclt" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.609585 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-hj4jh"] Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.609684 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-8jxpl" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.621991 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:24 crc kubenswrapper[4796]: E1202 20:14:24.624114 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:25.124096555 +0000 UTC m=+148.127472089 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.633945 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x28m2\" (UniqueName: \"kubernetes.io/projected/3c7bea3f-e468-4655-b241-ae19d336c6c0-kube-api-access-x28m2\") pod \"control-plane-machine-set-operator-78cbb6b69f-8qqvs\" (UID: \"3c7bea3f-e468-4655-b241-ae19d336c6c0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8qqvs" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.645754 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2hmv\" (UniqueName: \"kubernetes.io/projected/fd1a5c31-4bd7-4634-994e-d3681eaae556-kube-api-access-r2hmv\") pod \"ingress-canary-mfxpp\" (UID: \"fd1a5c31-4bd7-4634-994e-d3681eaae556\") " pod="openshift-ingress-canary/ingress-canary-mfxpp" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.657595 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv7f8\" (UniqueName: \"kubernetes.io/projected/74234e67-0f4f-4706-95a6-e04ac3ec94a5-kube-api-access-nv7f8\") pod \"service-ca-9c57cc56f-cjf62\" (UID: \"74234e67-0f4f-4706-95a6-e04ac3ec94a5\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjf62" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.674452 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8vbxj" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.681485 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txzpl\" (UniqueName: \"kubernetes.io/projected/934c0aac-761d-43af-b40b-34e6d135b9fe-kube-api-access-txzpl\") pod \"machine-config-server-ptmbv\" (UID: \"934c0aac-761d-43af-b40b-34e6d135b9fe\") " pod="openshift-machine-config-operator/machine-config-server-ptmbv" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.681689 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-js54s" Dec 02 20:14:24 crc kubenswrapper[4796]: W1202 20:14:24.709659 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d7de865_36cb_4d66_b97f_4e73717ea4a9.slice/crio-bf99d8c5da44b67dfe6214c7c0e2ba078deba2a301d3c5355370847d26db2392 WatchSource:0}: Error finding container bf99d8c5da44b67dfe6214c7c0e2ba078deba2a301d3c5355370847d26db2392: Status 404 returned error can't find the container with id bf99d8c5da44b67dfe6214c7c0e2ba078deba2a301d3c5355370847d26db2392 Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.713614 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsjqz\" (UniqueName: \"kubernetes.io/projected/b273d88f-06a4-4c2d-9a6d-339954a14abd-kube-api-access-hsjqz\") pod \"machine-config-operator-74547568cd-bfwqk\" (UID: \"b273d88f-06a4-4c2d-9a6d-339954a14abd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bfwqk" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.722967 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:24 crc kubenswrapper[4796]: E1202 20:14:24.724501 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:25.224482636 +0000 UTC m=+148.227858170 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.734830 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2gdh\" (UniqueName: \"kubernetes.io/projected/216c9e89-3d03-460f-b45a-da07125e4109-kube-api-access-s2gdh\") pod \"olm-operator-6b444d44fb-zhfv7\" (UID: \"216c9e89-3d03-460f-b45a-da07125e4109\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zhfv7" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.744922 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-r5ddg" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.756675 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mlwv\" (UniqueName: \"kubernetes.io/projected/298ad988-5006-4eb8-8f3b-72bc50af51a6-kube-api-access-2mlwv\") pod \"csi-hostpathplugin-tjstr\" (UID: \"298ad988-5006-4eb8-8f3b-72bc50af51a6\") " pod="hostpath-provisioner/csi-hostpathplugin-tjstr" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.761197 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trzmf\" (UniqueName: \"kubernetes.io/projected/1bd7655e-5c79-4963-bb2a-c8b55f131a39-kube-api-access-trzmf\") pod \"service-ca-operator-777779d784-54d99\" (UID: \"1bd7655e-5c79-4963-bb2a-c8b55f131a39\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-54d99" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.767522 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zhfv7" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.774195 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fpclt" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.780979 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bfwqk" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.782138 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mkff\" (UniqueName: \"kubernetes.io/projected/8849ec8b-1b79-4779-ae4c-56143ff6a938-kube-api-access-8mkff\") pod \"package-server-manager-789f6589d5-fqd2f\" (UID: \"8849ec8b-1b79-4779-ae4c-56143ff6a938\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqd2f" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.790161 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8qqvs" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.800768 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/161618bd-f935-4923-95bb-70d4cfd6eb1f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-c6b87\" (UID: \"161618bd-f935-4923-95bb-70d4cfd6eb1f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c6b87" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.824601 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqd2f" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.833657 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-54d99" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.833316 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4br7\" (UniqueName: \"kubernetes.io/projected/99b7235a-d9b2-489c-b437-1aac26d04cc1-kube-api-access-x4br7\") pod \"packageserver-d55dfcdfc-rlp96\" (UID: \"99b7235a-d9b2-489c-b437-1aac26d04cc1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rlp96" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.838063 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:24 crc kubenswrapper[4796]: E1202 20:14:24.839498 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:25.339479609 +0000 UTC m=+148.342855143 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.845203 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5rzg\" (UniqueName: \"kubernetes.io/projected/0a51e701-99f5-423c-a413-464a283751f4-kube-api-access-l5rzg\") pod \"marketplace-operator-79b997595-8nh9j\" (UID: \"0a51e701-99f5-423c-a413-464a283751f4\") " pod="openshift-marketplace/marketplace-operator-79b997595-8nh9j" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.854741 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-cjf62" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.863604 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-mfxpp" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.875655 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-ptmbv" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.876635 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn55s\" (UniqueName: \"kubernetes.io/projected/8eadd752-dac7-4cc8-8e44-11f8b3f35c74-kube-api-access-zn55s\") pod \"dns-default-slnp6\" (UID: \"8eadd752-dac7-4cc8-8e44-11f8b3f35c74\") " pod="openshift-dns/dns-default-slnp6" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.901516 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-tjstr" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.904699 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d6gl\" (UniqueName: \"kubernetes.io/projected/bfed3adf-1892-4da3-8ffc-f4033036a4ca-kube-api-access-2d6gl\") pod \"collect-profiles-29411760-zzp4z\" (UID: \"bfed3adf-1892-4da3-8ffc-f4033036a4ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411760-zzp4z" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.918215 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wt8x\" (UniqueName: \"kubernetes.io/projected/b43a5c93-cc27-4971-8cbc-453a421990c6-kube-api-access-2wt8x\") pod \"etcd-operator-b45778765-sfw6z\" (UID: \"b43a5c93-cc27-4971-8cbc-453a421990c6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sfw6z" Dec 02 20:14:24 crc kubenswrapper[4796]: I1202 20:14:24.939074 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:24 crc kubenswrapper[4796]: E1202 20:14:24.939844 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:25.439819609 +0000 UTC m=+148.443195133 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:25 crc kubenswrapper[4796]: I1202 20:14:25.047868 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:25 crc kubenswrapper[4796]: E1202 20:14:25.048717 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:25.548703714 +0000 UTC m=+148.552079248 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:25 crc kubenswrapper[4796]: I1202 20:14:25.065730 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ctm9n" event={"ID":"2deb1633-420a-4e19-bf12-0c853eb1da21","Type":"ContainerStarted","Data":"eb3f3e3a2903e9704e7eec656cdece4bf8675404e123e0d85e467d1df8aaf2de"} Dec 02 20:14:25 crc kubenswrapper[4796]: I1202 20:14:25.065785 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ctm9n" event={"ID":"2deb1633-420a-4e19-bf12-0c853eb1da21","Type":"ContainerStarted","Data":"8c23d41c35fcf4a59ef152873aac472ba6c35d8f73a4f4159b1ebc700077a973"} Dec 02 20:14:25 crc kubenswrapper[4796]: I1202 20:14:25.065836 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rlp96" Dec 02 20:14:25 crc kubenswrapper[4796]: I1202 20:14:25.068920 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-fh2w9"] Dec 02 20:14:25 crc kubenswrapper[4796]: I1202 20:14:25.074072 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s22mc" event={"ID":"92175868-fbbe-4efc-824d-9af2cd73b853","Type":"ContainerStarted","Data":"d3bab075864e5dfa4237a0c86b25bc8749fcee20ff526aeaf4be8023c03f15e0"} Dec 02 20:14:25 crc kubenswrapper[4796]: I1202 20:14:25.081141 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bprtg"] Dec 02 20:14:25 crc kubenswrapper[4796]: I1202 20:14:25.099024 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c6b87" Dec 02 20:14:25 crc kubenswrapper[4796]: I1202 20:14:25.108542 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hj4jh" event={"ID":"8d7de865-36cb-4d66-b97f-4e73717ea4a9","Type":"ContainerStarted","Data":"bf99d8c5da44b67dfe6214c7c0e2ba078deba2a301d3c5355370847d26db2392"} Dec 02 20:14:25 crc kubenswrapper[4796]: I1202 20:14:25.108815 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8nh9j" Dec 02 20:14:25 crc kubenswrapper[4796]: I1202 20:14:25.114246 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-kgf8g"] Dec 02 20:14:25 crc kubenswrapper[4796]: I1202 20:14:25.115498 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-sfw6z" Dec 02 20:14:25 crc kubenswrapper[4796]: I1202 20:14:25.124612 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"8132a45aec6aaa425fea1f17bc1f5eb6a93fbf7f43cb827e851eaac6b72ca2bb"} Dec 02 20:14:25 crc kubenswrapper[4796]: I1202 20:14:25.124661 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"4dbb3b0d4d84497dc4df67bb9ad87cddd34732865ce7df1815e1896644e837e9"} Dec 02 20:14:25 crc kubenswrapper[4796]: I1202 20:14:25.124981 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:14:25 crc kubenswrapper[4796]: I1202 20:14:25.134655 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-hn5qp" event={"ID":"9775f308-f5d8-4bc7-bfc0-00f065833a55","Type":"ContainerStarted","Data":"85d89177910a3ddae1a2a336d9c6c086c6ac971ab50d644364e9b2d5f883362a"} Dec 02 20:14:25 crc kubenswrapper[4796]: I1202 20:14:25.134701 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-hn5qp" event={"ID":"9775f308-f5d8-4bc7-bfc0-00f065833a55","Type":"ContainerStarted","Data":"7c61eebf9bae757c3bae69e189e984bc5027c7434ca7f036bb5c12c1315cb953"} Dec 02 20:14:25 crc kubenswrapper[4796]: I1202 20:14:25.140784 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"a8ad9f3f8eaa2298bd862f825e9bd2525f3e4c3ac61e87b21099052fa0df6393"} Dec 02 20:14:25 crc kubenswrapper[4796]: I1202 20:14:25.140824 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"5bd0b6614337908f12a67e9e3d4062dfd2db06a109ff5bfde902b66dd655e37b"} Dec 02 20:14:25 crc kubenswrapper[4796]: I1202 20:14:25.142385 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411760-zzp4z" Dec 02 20:14:25 crc kubenswrapper[4796]: I1202 20:14:25.143126 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"2fc513a6162806987013e3c469442566d5b915185a7047bd5bb05b1d1e064315"} Dec 02 20:14:25 crc kubenswrapper[4796]: I1202 20:14:25.143158 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"a02366c1885a74605879d8cb84f9e7ce714fc94aada2c3aa5a614db804cd8200"} Dec 02 20:14:25 crc kubenswrapper[4796]: I1202 20:14:25.149999 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:25 crc kubenswrapper[4796]: E1202 20:14:25.150415 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:25.650399866 +0000 UTC m=+148.653775400 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:25 crc kubenswrapper[4796]: I1202 20:14:25.155588 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-zjrkr" event={"ID":"a3c8abe1-552e-404c-be1f-88f30e467d8f","Type":"ContainerStarted","Data":"9aeb65988071ca8766b793519ea21349d480a71b9630327503905cda58366127"} Dec 02 20:14:25 crc kubenswrapper[4796]: I1202 20:14:25.156287 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-zjrkr" Dec 02 20:14:25 crc kubenswrapper[4796]: I1202 20:14:25.159003 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mvqcs" event={"ID":"3ceb7b1c-b699-46ed-9571-0dd1fe3dce69","Type":"ContainerStarted","Data":"aefb9ef5707730854e87a1bf385f2b315562af04cab514486bea4e31bc269811"} Dec 02 20:14:25 crc kubenswrapper[4796]: I1202 20:14:25.166858 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-slnp6" Dec 02 20:14:25 crc kubenswrapper[4796]: I1202 20:14:25.168165 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kmzq8" event={"ID":"f87b13cb-58d7-402e-b4cc-0206862019dd","Type":"ContainerStarted","Data":"b16d4788c69bdbc05f8b26db11f945c1f60eaa90b24e1c13fa5d92dc1dafd797"} Dec 02 20:14:25 crc kubenswrapper[4796]: I1202 20:14:25.168195 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kmzq8" event={"ID":"f87b13cb-58d7-402e-b4cc-0206862019dd","Type":"ContainerStarted","Data":"bebd5aab279e67bdceef284d5724a5261feead09b350e971970f867415106d2f"} Dec 02 20:14:25 crc kubenswrapper[4796]: I1202 20:14:25.170490 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-zjrkr" Dec 02 20:14:25 crc kubenswrapper[4796]: I1202 20:14:25.191807 4796 patch_prober.go:28] interesting pod/machine-config-daemon-wzhpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 20:14:25 crc kubenswrapper[4796]: I1202 20:14:25.191873 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 20:14:25 crc kubenswrapper[4796]: I1202 20:14:25.251526 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:25 crc kubenswrapper[4796]: E1202 20:14:25.254292 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:25.75426911 +0000 UTC m=+148.757644644 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:25 crc kubenswrapper[4796]: I1202 20:14:25.359040 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:25 crc kubenswrapper[4796]: E1202 20:14:25.377109 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:25.877075624 +0000 UTC m=+148.880451158 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:25 crc kubenswrapper[4796]: I1202 20:14:25.464117 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:25 crc kubenswrapper[4796]: E1202 20:14:25.464590 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:25.964571181 +0000 UTC m=+148.967946715 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:25 crc kubenswrapper[4796]: I1202 20:14:25.567016 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:25 crc kubenswrapper[4796]: E1202 20:14:25.567665 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:26.067642057 +0000 UTC m=+149.071017581 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:25 crc kubenswrapper[4796]: I1202 20:14:25.669808 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:25 crc kubenswrapper[4796]: E1202 20:14:25.670575 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:26.170559029 +0000 UTC m=+149.173934563 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:25 crc kubenswrapper[4796]: I1202 20:14:25.675358 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-zjrkr" podStartSLOduration=126.675333254 podStartE2EDuration="2m6.675333254s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:25.672575137 +0000 UTC m=+148.675950681" watchObservedRunningTime="2025-12-02 20:14:25.675333254 +0000 UTC m=+148.678708788" Dec 02 20:14:25 crc kubenswrapper[4796]: I1202 20:14:25.743480 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kmzq8" podStartSLOduration=126.743446113 podStartE2EDuration="2m6.743446113s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:25.703775572 +0000 UTC m=+148.707151106" watchObservedRunningTime="2025-12-02 20:14:25.743446113 +0000 UTC m=+148.746821647" Dec 02 20:14:25 crc kubenswrapper[4796]: I1202 20:14:25.782034 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:25 crc kubenswrapper[4796]: E1202 20:14:25.782489 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:26.282470297 +0000 UTC m=+149.285845821 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:25 crc kubenswrapper[4796]: I1202 20:14:25.885761 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:25 crc kubenswrapper[4796]: E1202 20:14:25.886176 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:26.386165608 +0000 UTC m=+149.389541142 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:25 crc kubenswrapper[4796]: I1202 20:14:25.987105 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:25 crc kubenswrapper[4796]: E1202 20:14:25.987599 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:26.487573983 +0000 UTC m=+149.490949527 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:26 crc kubenswrapper[4796]: I1202 20:14:26.056915 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-hn5qp" podStartSLOduration=127.05688749 podStartE2EDuration="2m7.05688749s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:26.050879455 +0000 UTC m=+149.054254999" watchObservedRunningTime="2025-12-02 20:14:26.05688749 +0000 UTC m=+149.060263024" Dec 02 20:14:26 crc kubenswrapper[4796]: I1202 20:14:26.088732 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:26 crc kubenswrapper[4796]: E1202 20:14:26.089214 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:26.589195723 +0000 UTC m=+149.592571257 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:26 crc kubenswrapper[4796]: I1202 20:14:26.115886 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-hn5qp" Dec 02 20:14:26 crc kubenswrapper[4796]: I1202 20:14:26.177596 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-ptmbv" event={"ID":"934c0aac-761d-43af-b40b-34e6d135b9fe","Type":"ContainerStarted","Data":"3296ad6a9b5f0b06e0427f13d8259cbe78c3d42e54301c1075b09c1c38c1c4d8"} Dec 02 20:14:26 crc kubenswrapper[4796]: I1202 20:14:26.177671 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-ptmbv" event={"ID":"934c0aac-761d-43af-b40b-34e6d135b9fe","Type":"ContainerStarted","Data":"a6f385f111c53d3b2e9673c00b676c3d283b2f65a83b18ea9bf7b800bb95c461"} Dec 02 20:14:26 crc kubenswrapper[4796]: I1202 20:14:26.180643 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s22mc" event={"ID":"92175868-fbbe-4efc-824d-9af2cd73b853","Type":"ContainerStarted","Data":"2fe6a31caf4c1f82b6b512d2c98da53d015da7747fa52e329ca0533d8a74193b"} Dec 02 20:14:26 crc kubenswrapper[4796]: I1202 20:14:26.180675 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s22mc" event={"ID":"92175868-fbbe-4efc-824d-9af2cd73b853","Type":"ContainerStarted","Data":"59280a15531f7c1c41d1714bd4114e692830e67a230a35275beb0879c539752b"} Dec 02 20:14:26 crc kubenswrapper[4796]: I1202 20:14:26.183505 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ctm9n" event={"ID":"2deb1633-420a-4e19-bf12-0c853eb1da21","Type":"ContainerStarted","Data":"51e806056fb3402627be390f3e987f333dd78f2f8542aaa29ae69f7499720fba"} Dec 02 20:14:26 crc kubenswrapper[4796]: I1202 20:14:26.185647 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bprtg" event={"ID":"32ceedc0-31dc-4816-bf1c-30b63b08e98a","Type":"ContainerStarted","Data":"462172e9d8dc233b6c44340d2d47f6cb790d83ce163a2eb7b8580e7e0075b723"} Dec 02 20:14:26 crc kubenswrapper[4796]: I1202 20:14:26.185851 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bprtg" event={"ID":"32ceedc0-31dc-4816-bf1c-30b63b08e98a","Type":"ContainerStarted","Data":"1e3f7274e34a7c1967aa6bf9047597befaa7faf0f032fbbb5e647d6a4c350332"} Dec 02 20:14:26 crc kubenswrapper[4796]: I1202 20:14:26.188530 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mvqcs" event={"ID":"3ceb7b1c-b699-46ed-9571-0dd1fe3dce69","Type":"ContainerStarted","Data":"95a9de4547ceb6b48ccb6a06c5deeaa66779e0724d531f7b7c32557ff7414146"} Dec 02 20:14:26 crc kubenswrapper[4796]: I1202 20:14:26.190819 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:26 crc kubenswrapper[4796]: E1202 20:14:26.191160 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:26.69113552 +0000 UTC m=+149.694511054 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:26 crc kubenswrapper[4796]: I1202 20:14:26.191524 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hj4jh" event={"ID":"8d7de865-36cb-4d66-b97f-4e73717ea4a9","Type":"ContainerStarted","Data":"482cab42254f2bc72e0de59db171b2451ca1c2c985d529a4e8607dede303c558"} Dec 02 20:14:26 crc kubenswrapper[4796]: I1202 20:14:26.191566 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hj4jh" event={"ID":"8d7de865-36cb-4d66-b97f-4e73717ea4a9","Type":"ContainerStarted","Data":"f31bcf41db453889961c38147c5e0709a4b015776603cdc9ed64a6a6dd7122bf"} Dec 02 20:14:26 crc kubenswrapper[4796]: I1202 20:14:26.197703 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-kgf8g" event={"ID":"9525db23-79d5-4936-b9a3-7af09b979ad6","Type":"ContainerStarted","Data":"ee71c22a58b0210bb8d4367e6cb652c539a7d6219ef88fee6da421d35a981e20"} Dec 02 20:14:26 crc kubenswrapper[4796]: I1202 20:14:26.197747 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-kgf8g" event={"ID":"9525db23-79d5-4936-b9a3-7af09b979ad6","Type":"ContainerStarted","Data":"f143485158bc444966df1552080d9ed8f1a7f7b8da1d1a45e0379c73f4efba06"} Dec 02 20:14:26 crc kubenswrapper[4796]: I1202 20:14:26.201060 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh2w9" event={"ID":"27bc5a48-7598-4ca1-a0ce-e08d8a694a13","Type":"ContainerStarted","Data":"bafa938775278872aa38345143fcda7c91ea8fe9401aaf7056606493ce1825ac"} Dec 02 20:14:26 crc kubenswrapper[4796]: I1202 20:14:26.293446 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:26 crc kubenswrapper[4796]: E1202 20:14:26.299128 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-02 20:14:26.799061863 +0000 UTC m=+149.802437397 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:26 crc kubenswrapper[4796]: I1202 20:14:26.395471 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:26 crc kubenswrapper[4796]: E1202 20:14:26.395936 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:26.895913788 +0000 UTC m=+149.899289322 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:26 crc kubenswrapper[4796]: I1202 20:14:26.423514 4796 patch_prober.go:28] interesting pod/router-default-5444994796-hn5qp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 20:14:26 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Dec 02 20:14:26 crc kubenswrapper[4796]: [+]process-running ok Dec 02 20:14:26 crc kubenswrapper[4796]: healthz check failed Dec 02 20:14:26 crc kubenswrapper[4796]: I1202 20:14:26.423586 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hn5qp" podUID="9775f308-f5d8-4bc7-bfc0-00f065833a55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 20:14:26 crc kubenswrapper[4796]: I1202 20:14:26.497768 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:26 crc kubenswrapper[4796]: E1202 20:14:26.498105 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:26.998092622 +0000 UTC m=+150.001468146 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:26 crc kubenswrapper[4796]: I1202 20:14:26.521143 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s22mc" podStartSLOduration=127.521119259 podStartE2EDuration="2m7.521119259s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:26.519779006 +0000 UTC m=+149.523154540" watchObservedRunningTime="2025-12-02 20:14:26.521119259 +0000 UTC m=+149.524494793" Dec 02 20:14:26 crc kubenswrapper[4796]: I1202 20:14:26.599788 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:26 crc kubenswrapper[4796]: E1202 20:14:26.599898 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:27.099882065 +0000 UTC m=+150.103257599 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:26 crc kubenswrapper[4796]: I1202 20:14:26.600075 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:26 crc kubenswrapper[4796]: E1202 20:14:26.600437 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:27.100430449 +0000 UTC m=+150.103805983 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:26 crc kubenswrapper[4796]: I1202 20:14:26.620306 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-kgf8g" podStartSLOduration=127.620283999 podStartE2EDuration="2m7.620283999s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:26.577430252 +0000 UTC m=+149.580805786" watchObservedRunningTime="2025-12-02 20:14:26.620283999 +0000 UTC m=+149.623659533" Dec 02 20:14:26 crc kubenswrapper[4796]: I1202 20:14:26.667821 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ctm9n" podStartSLOduration=127.66780269 podStartE2EDuration="2m7.66780269s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:26.622026131 +0000 UTC m=+149.625401655" watchObservedRunningTime="2025-12-02 20:14:26.66780269 +0000 UTC m=+149.671178224" Dec 02 20:14:26 crc kubenswrapper[4796]: I1202 20:14:26.702733 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:26 crc kubenswrapper[4796]: E1202 20:14:26.703223 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:27.203207417 +0000 UTC m=+150.206582951 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:26 crc kubenswrapper[4796]: I1202 20:14:26.714783 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bprtg" podStartSLOduration=127.714764646 podStartE2EDuration="2m7.714764646s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:26.669955671 +0000 UTC m=+149.673331205" watchObservedRunningTime="2025-12-02 20:14:26.714764646 +0000 UTC m=+149.718140180" Dec 02 20:14:26 crc kubenswrapper[4796]: I1202 20:14:26.719534 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-mvqcs" podStartSLOduration=127.719520432 podStartE2EDuration="2m7.719520432s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:26.711884526 +0000 UTC m=+149.715260060" watchObservedRunningTime="2025-12-02 20:14:26.719520432 +0000 UTC m=+149.722895966" Dec 02 20:14:26 crc kubenswrapper[4796]: I1202 20:14:26.741788 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-ptmbv" podStartSLOduration=5.741769651 podStartE2EDuration="5.741769651s" podCreationTimestamp="2025-12-02 20:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:26.737653071 +0000 UTC m=+149.741028605" watchObservedRunningTime="2025-12-02 20:14:26.741769651 +0000 UTC m=+149.745145185" Dec 02 20:14:26 crc kubenswrapper[4796]: I1202 20:14:26.750831 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pmh58"] Dec 02 20:14:26 crc kubenswrapper[4796]: I1202 20:14:26.762617 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4nq9r"] Dec 02 20:14:26 crc kubenswrapper[4796]: W1202 20:14:26.770489 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda986dc8a_d877_4144_9ed4_6e30cff96787.slice/crio-428783d5d980fe94139ce244b754e957f668e6e6b59173a73ce3983ca1264892 WatchSource:0}: Error finding container 428783d5d980fe94139ce244b754e957f668e6e6b59173a73ce3983ca1264892: Status 404 returned error can't find the container with id 428783d5d980fe94139ce244b754e957f668e6e6b59173a73ce3983ca1264892 Dec 02 20:14:26 crc kubenswrapper[4796]: W1202 20:14:26.774924 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38aa3029_8a07_415a_9da2_9a0dca76fdb0.slice/crio-452835e6495e206ee116b3bf8201fd0fc5cc6fe1453a3bde7f5f29c0fc4de173 WatchSource:0}: Error finding container 
452835e6495e206ee116b3bf8201fd0fc5cc6fe1453a3bde7f5f29c0fc4de173: Status 404 returned error can't find the container with id 452835e6495e206ee116b3bf8201fd0fc5cc6fe1453a3bde7f5f29c0fc4de173 Dec 02 20:14:26 crc kubenswrapper[4796]: I1202 20:14:26.784827 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hj4jh" podStartSLOduration=127.784810492 podStartE2EDuration="2m7.784810492s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:26.776832359 +0000 UTC m=+149.780207893" watchObservedRunningTime="2025-12-02 20:14:26.784810492 +0000 UTC m=+149.788186026" Dec 02 20:14:26 crc kubenswrapper[4796]: I1202 20:14:26.804729 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:26 crc kubenswrapper[4796]: E1202 20:14:26.805001 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:27.304989771 +0000 UTC m=+150.308365305 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:26 crc kubenswrapper[4796]: I1202 20:14:26.816003 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6tx4j"] Dec 02 20:14:26 crc kubenswrapper[4796]: I1202 20:14:26.906997 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:26 crc kubenswrapper[4796]: E1202 20:14:26.907394 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:27.407375949 +0000 UTC m=+150.410751483 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:26 crc kubenswrapper[4796]: I1202 20:14:26.931924 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jfb6p"] Dec 02 20:14:27 crc kubenswrapper[4796]: I1202 20:14:27.010467 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:27 crc kubenswrapper[4796]: E1202 20:14:27.010758 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:27.510746292 +0000 UTC m=+150.514121826 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:27 crc kubenswrapper[4796]: I1202 20:14:27.078325 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-p6qqx"] Dec 02 20:14:27 crc kubenswrapper[4796]: I1202 20:14:27.115945 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:27 crc kubenswrapper[4796]: E1202 20:14:27.116413 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:27.616396189 +0000 UTC m=+150.619771713 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:27 crc kubenswrapper[4796]: I1202 20:14:27.128301 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gcsc8"] Dec 02 20:14:27 crc kubenswrapper[4796]: I1202 20:14:27.131511 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qfgsc"] Dec 02 20:14:27 crc kubenswrapper[4796]: I1202 20:14:27.141220 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-rmbh4"] Dec 02 20:14:27 crc kubenswrapper[4796]: I1202 20:14:27.151108 4796 patch_prober.go:28] interesting pod/router-default-5444994796-hn5qp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 20:14:27 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Dec 02 20:14:27 crc kubenswrapper[4796]: [+]process-running ok Dec 02 20:14:27 crc kubenswrapper[4796]: healthz check failed Dec 02 20:14:27 crc kubenswrapper[4796]: I1202 20:14:27.151191 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hn5qp" podUID="9775f308-f5d8-4bc7-bfc0-00f065833a55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 20:14:27 crc kubenswrapper[4796]: I1202 20:14:27.227332 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:27 crc kubenswrapper[4796]: E1202 20:14:27.227643 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:27.727632102 +0000 UTC m=+150.731007636 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:27 crc kubenswrapper[4796]: W1202 20:14:27.243421 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc77d403a_df91_47fe_bc4d_486dd5f7142f.slice/crio-af3b8df999191cb022a6cca0e48e5ba5ce03f720cdc7139e5885cf0496d48c4a WatchSource:0}: Error finding container af3b8df999191cb022a6cca0e48e5ba5ce03f720cdc7139e5885cf0496d48c4a: Status 404 returned error can't find the container with id af3b8df999191cb022a6cca0e48e5ba5ce03f720cdc7139e5885cf0496d48c4a Dec 02 20:14:27 crc kubenswrapper[4796]: W1202 20:14:27.287735 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode01c8d0d_e590_4e8f_96f0_85b5d7f29275.slice/crio-60a6bca59b9fd8453382e0803f1331bd2902f59c6e26957caf9efc1e72f3e9fe WatchSource:0}: Error finding container 60a6bca59b9fd8453382e0803f1331bd2902f59c6e26957caf9efc1e72f3e9fe: Status 404 returned error can't find the container with id 60a6bca59b9fd8453382e0803f1331bd2902f59c6e26957caf9efc1e72f3e9fe Dec 02 20:14:27 crc kubenswrapper[4796]: I1202 20:14:27.334750 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:27 crc kubenswrapper[4796]: E1202 20:14:27.336299 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:27.836280692 +0000 UTC m=+150.839656226 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:27 crc kubenswrapper[4796]: I1202 20:14:27.379363 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-p6qqx" event={"ID":"f24621ef-6ee6-4d48-bcdb-ca5f8bb7731a","Type":"ContainerStarted","Data":"6ec9116cb0cbbf1042c623cfa3662afb3caea1bab55666ce37c4e1b42aba5b16"} Dec 02 20:14:27 crc kubenswrapper[4796]: I1202 20:14:27.388822 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zhfv7"] Dec 02 20:14:27 crc kubenswrapper[4796]: I1202 20:14:27.388875 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" event={"ID":"d39a70e8-5d53-431a-8413-bbb2041fc8dd","Type":"ContainerStarted","Data":"661cf9c9e20cc34fc83383b90bbc61d63b7f159fea2daeede1974a5a8347d247"} Dec 02 20:14:27 crc kubenswrapper[4796]: I1202 20:14:27.424574 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jfb6p" event={"ID":"21dab2f8-0cdd-4cd9-b3ed-aa0d4bce10e8","Type":"ContainerStarted","Data":"4653a2034d729f610f1fcf18b9b2702790422f00b4cc300d9af79645f9c5fd9b"} Dec 02 20:14:27 crc kubenswrapper[4796]: I1202 20:14:27.431889 4796 generic.go:334] "Generic (PLEG): container finished" podID="27bc5a48-7598-4ca1-a0ce-e08d8a694a13" containerID="bb93d6ef7e32c6c73d76b38d6923d9a1c0a264fa380c3907a50204f07d86c6ca" exitCode=0 Dec 02 20:14:27 crc kubenswrapper[4796]: I1202 20:14:27.431964 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh2w9" event={"ID":"27bc5a48-7598-4ca1-a0ce-e08d8a694a13","Type":"ContainerDied","Data":"bb93d6ef7e32c6c73d76b38d6923d9a1c0a264fa380c3907a50204f07d86c6ca"} Dec 02 20:14:27 crc kubenswrapper[4796]: I1202 20:14:27.440191 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:27 crc kubenswrapper[4796]: E1202 20:14:27.440566 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:27.940553597 +0000 UTC m=+150.943929121 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:27 crc kubenswrapper[4796]: I1202 20:14:27.454229 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bfwqk"] Dec 02 20:14:27 crc kubenswrapper[4796]: I1202 20:14:27.457151 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pmh58" event={"ID":"a986dc8a-d877-4144-9ed4-6e30cff96787","Type":"ContainerStarted","Data":"7abdc545916a1d9212042eab2508aea347eb0da104afb7fe8e23bb766eaca53f"} Dec 02 20:14:27 crc kubenswrapper[4796]: I1202 20:14:27.457191 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pmh58" event={"ID":"a986dc8a-d877-4144-9ed4-6e30cff96787","Type":"ContainerStarted","Data":"428783d5d980fe94139ce244b754e957f668e6e6b59173a73ce3983ca1264892"} Dec 02 20:14:27 crc kubenswrapper[4796]: I1202 20:14:27.457563 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pmh58" Dec 02 20:14:27 crc kubenswrapper[4796]: I1202 20:14:27.458962 4796 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-pmh58 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Dec 02 20:14:27 crc kubenswrapper[4796]: I1202 20:14:27.459004 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pmh58" podUID="a986dc8a-d877-4144-9ed4-6e30cff96787" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Dec 02 20:14:27 crc kubenswrapper[4796]: I1202 20:14:27.459952 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4nq9r" event={"ID":"38aa3029-8a07-415a-9da2-9a0dca76fdb0","Type":"ContainerStarted","Data":"452835e6495e206ee116b3bf8201fd0fc5cc6fe1453a3bde7f5f29c0fc4de173"} Dec 02 20:14:27 crc kubenswrapper[4796]: I1202 20:14:27.473946 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8vbxj"] Dec 02 20:14:27 crc kubenswrapper[4796]: I1202 20:14:27.480893 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-8jxpl"] Dec 02 20:14:27 crc kubenswrapper[4796]: I1202 20:14:27.490073 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-slnp6"] Dec 02 20:14:27 crc kubenswrapper[4796]: I1202 20:14:27.497373 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqd2f"] Dec 02 20:14:27 crc kubenswrapper[4796]: W1202 20:14:27.505946 4796 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb273d88f_06a4_4c2d_9a6d_339954a14abd.slice/crio-bd67cf3c3e80f1c9fcd4143d153305b75dfd5407fba46eec6d153f5ea471b311 WatchSource:0}: Error finding container bd67cf3c3e80f1c9fcd4143d153305b75dfd5407fba46eec6d153f5ea471b311: Status 404 returned error can't find the container with id bd67cf3c3e80f1c9fcd4143d153305b75dfd5407fba46eec6d153f5ea471b311 Dec 02 20:14:27 crc kubenswrapper[4796]: I1202 20:14:27.540857 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:27 crc kubenswrapper[4796]: E1202 20:14:27.544474 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:28.044456052 +0000 UTC m=+151.047831586 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:27 crc kubenswrapper[4796]: I1202 20:14:27.546387 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sxkvc"] Dec 02 20:14:27 crc kubenswrapper[4796]: I1202 20:14:27.643307 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:27 crc kubenswrapper[4796]: E1202 20:14:27.644039 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:28.144028522 +0000 UTC m=+151.147404056 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:27 crc kubenswrapper[4796]: I1202 20:14:27.700186 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pmh58" podStartSLOduration=128.700164281 podStartE2EDuration="2m8.700164281s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:27.697593089 +0000 UTC m=+150.700968623" watchObservedRunningTime="2025-12-02 20:14:27.700164281 +0000 UTC m=+150.703539815" Dec 02 20:14:27 crc kubenswrapper[4796]: I1202 20:14:27.745136 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:27 crc kubenswrapper[4796]: E1202 20:14:27.745392 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:28.245331144 +0000 UTC m=+151.248706678 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:27 crc kubenswrapper[4796]: I1202 20:14:27.745618 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:27 crc kubenswrapper[4796]: E1202 20:14:27.746151 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:28.246137494 +0000 UTC m=+151.249513028 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:27 crc kubenswrapper[4796]: I1202 20:14:27.846688 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:27 crc kubenswrapper[4796]: E1202 20:14:27.846915 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:28.346872183 +0000 UTC m=+151.350247727 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:27 crc kubenswrapper[4796]: I1202 20:14:27.846962 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:27 crc kubenswrapper[4796]: E1202 20:14:27.847406 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:28.347386235 +0000 UTC m=+151.350761769 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:27 crc kubenswrapper[4796]: I1202 20:14:27.948446 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:27 crc kubenswrapper[4796]: E1202 20:14:27.948765 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:28.448726329 +0000 UTC m=+151.452101863 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.049879 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:28 crc kubenswrapper[4796]: E1202 20:14:28.050527 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:28.550501192 +0000 UTC m=+151.553876726 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.135620 4796 patch_prober.go:28] interesting pod/router-default-5444994796-hn5qp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 20:14:28 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Dec 02 20:14:28 crc kubenswrapper[4796]: [+]process-running ok Dec 02 20:14:28 crc kubenswrapper[4796]: healthz check failed Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.135700 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hn5qp" podUID="9775f308-f5d8-4bc7-bfc0-00f065833a55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.139294 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rlp96"] Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.140986 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8qqvs"] Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.145884 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-fpclt"] Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.148039 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tjstr"] Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.151959 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:28 crc kubenswrapper[4796]: E1202 20:14:28.153460 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:28.653416544 +0000 UTC m=+151.656792078 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.165481 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-r5ddg"] Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.175703 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8nh9j"] Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.201094 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-cjf62"] Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.205489 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ztcsx"] Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.211290 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-sfw6z"] Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.213768 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411760-zzp4z"] Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.226754 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-js54s"] Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.233711 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-54d99"] Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.240387 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lsdcg"] Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.240474 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-mfxpp"] Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.244274 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c6b87"] Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.254961 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:28 crc kubenswrapper[4796]: E1202 20:14:28.255369 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:28.755357142 +0000 UTC m=+151.758732676 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.356310 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:28 crc kubenswrapper[4796]: E1202 20:14:28.357089 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:28.857063263 +0000 UTC m=+151.860438797 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.460297 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:28 crc kubenswrapper[4796]: E1202 20:14:28.460661 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:28.960648721 +0000 UTC m=+151.964024255 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.476365 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-ztcsx" event={"ID":"5e268554-349b-41e2-a9ec-64b6de541ace","Type":"ContainerStarted","Data":"0a25fb1d80a12a4d0fc453f6a14ae32580cca2de8395da5b11da92b8168c23d3"} Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.479205 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gcsc8" event={"ID":"c77d403a-df91-47fe-bc4d-486dd5f7142f","Type":"ContainerStarted","Data":"d56a417833d631e75db91b66ce354f02afb568798fd217804370db377134cc57"} Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.479269 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gcsc8" event={"ID":"c77d403a-df91-47fe-bc4d-486dd5f7142f","Type":"ContainerStarted","Data":"af3b8df999191cb022a6cca0e48e5ba5ce03f720cdc7139e5885cf0496d48c4a"} Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.486206 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-sfw6z" event={"ID":"b43a5c93-cc27-4971-8cbc-453a421990c6","Type":"ContainerStarted","Data":"5cad324ca56b71f0a838620d44f1c9c86089eeeeadca4129792cde6b737234ba"} Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.493154 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rlp96" event={"ID":"99b7235a-d9b2-489c-b437-1aac26d04cc1","Type":"ContainerStarted","Data":"bfb0a354a016c25513fb9bd32a05911befcdf5a8e1dd5b60d0d63c36e4aa1963"} Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.505799 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqd2f" event={"ID":"8849ec8b-1b79-4779-ae4c-56143ff6a938","Type":"ContainerStarted","Data":"81aba7a108760697701ed76c9fce06690f1272ee7be67bb762ff999d37ab8dd6"} Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.509687 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gcsc8" podStartSLOduration=129.509659117 podStartE2EDuration="2m9.509659117s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:28.503378166 +0000 UTC m=+151.506753700" watchObservedRunningTime="2025-12-02 20:14:28.509659117 +0000 UTC m=+151.513034651" Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.516471 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-p6qqx" event={"ID":"f24621ef-6ee6-4d48-bcdb-ca5f8bb7731a","Type":"ContainerStarted","Data":"8706d1aa0defeef134dc75c899cf5629faf51919d5f8ec944f5552275a2e4a63"} Dec 02 20:14:28 crc 
kubenswrapper[4796]: I1202 20:14:28.523751 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411760-zzp4z" event={"ID":"bfed3adf-1892-4da3-8ffc-f4033036a4ca","Type":"ContainerStarted","Data":"ed3eb6878e5bd50a532a14789f653d997dd2217f2f8c0371c7d32bc8f3ebb9b6"} Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.526803 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-54d99" event={"ID":"1bd7655e-5c79-4963-bb2a-c8b55f131a39","Type":"ContainerStarted","Data":"b44e4883a7a10d335a19ee30f8d849883bcb62f0adb369c1f39ba144d4a66f31"} Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.534674 4796 generic.go:334] "Generic (PLEG): container finished" podID="21dab2f8-0cdd-4cd9-b3ed-aa0d4bce10e8" containerID="e4bde0f7f26d0f6b478918d829cb1f3841fd9c85f53dee1587cd96a5f83c2968" exitCode=0 Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.535019 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jfb6p" event={"ID":"21dab2f8-0cdd-4cd9-b3ed-aa0d4bce10e8","Type":"ContainerDied","Data":"e4bde0f7f26d0f6b478918d829cb1f3841fd9c85f53dee1587cd96a5f83c2968"} Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.535146 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jfb6p" Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.552136 4796 generic.go:334] "Generic (PLEG): container finished" podID="e01c8d0d-e590-4e8f-96f0-85b5d7f29275" containerID="9b94272443281790662442c5cb3658544e244a9eb866cd8fdb88a68732aec04c" exitCode=0 Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.552704 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" event={"ID":"e01c8d0d-e590-4e8f-96f0-85b5d7f29275","Type":"ContainerDied","Data":"9b94272443281790662442c5cb3658544e244a9eb866cd8fdb88a68732aec04c"} Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.552781 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" event={"ID":"e01c8d0d-e590-4e8f-96f0-85b5d7f29275","Type":"ContainerStarted","Data":"60a6bca59b9fd8453382e0803f1331bd2902f59c6e26957caf9efc1e72f3e9fe"} Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.557231 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jfb6p" podStartSLOduration=129.557211319 podStartE2EDuration="2m9.557211319s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:28.555973848 +0000 UTC m=+151.559349382" watchObservedRunningTime="2025-12-02 20:14:28.557211319 +0000 UTC m=+151.560586853" Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.562316 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:28 crc kubenswrapper[4796]: E1202 20:14:28.564374 4796 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:29.064341841 +0000 UTC m=+152.067717375 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.574645 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tjstr" event={"ID":"298ad988-5006-4eb8-8f3b-72bc50af51a6","Type":"ContainerStarted","Data":"e5dce3a24b3dfb5d79d9664c95171018c9a245b1fc050eebfc4bf6279f570ad4"} Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.580414 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bfwqk" event={"ID":"b273d88f-06a4-4c2d-9a6d-339954a14abd","Type":"ContainerStarted","Data":"bd67cf3c3e80f1c9fcd4143d153305b75dfd5407fba46eec6d153f5ea471b311"} Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.612242 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" event={"ID":"d39a70e8-5d53-431a-8413-bbb2041fc8dd","Type":"ContainerStarted","Data":"72c213f9ca2cac551450cb537f6a4775d532f541bc91084948551b928d75a112"} Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.612758 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.621694 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zhfv7" event={"ID":"216c9e89-3d03-460f-b45a-da07125e4109","Type":"ContainerStarted","Data":"4b290cc540fa721ed4632328e58fda68cb25eec12d46a6d0c91248cba09fb332"} Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.621748 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zhfv7" event={"ID":"216c9e89-3d03-460f-b45a-da07125e4109","Type":"ContainerStarted","Data":"de8aa3dd42a5a2e75ab320951addec8e729e4d72f68cbf3d340323652ded48c5"} Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.622084 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zhfv7" Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.626238 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-8jxpl" event={"ID":"e716e73a-1881-44ca-8af0-fb9defd9645b","Type":"ContainerStarted","Data":"e4ee61ffe4c5146c17bd90f415b61912d4b5c457413185bec39bc160bc9b169a"} Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.626621 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-8jxpl" Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.636692 4796 patch_prober.go:28] interesting pod/downloads-7954f5f757-8jxpl container/download-server namespace/openshift-console: Readiness probe status=failure 
output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.636722 4796 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-zhfv7 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.636763 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8jxpl" podUID="e716e73a-1881-44ca-8af0-fb9defd9645b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.636805 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zhfv7" podUID="216c9e89-3d03-460f-b45a-da07125e4109" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.639911 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8qqvs" event={"ID":"3c7bea3f-e468-4655-b241-ae19d336c6c0","Type":"ContainerStarted","Data":"4f8ae089be760588ae5bdf77be79039b1de2d650ab1fbdde8cc1b1a6739bc235"} Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.639906 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" podStartSLOduration=129.6398875 podStartE2EDuration="2m9.6398875s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:28.637113363 +0000 UTC m=+151.640488897" watchObservedRunningTime="2025-12-02 20:14:28.6398875 +0000 UTC m=+151.643263024" Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.649005 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-js54s" event={"ID":"8e53767d-5052-4220-9645-b8d6d433a7df","Type":"ContainerStarted","Data":"6f6c72f9c32b8e4fad7af62772284c4b9b4a993f673735275a3efa30ed1553f7"} Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.661810 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8vbxj" event={"ID":"03e312db-cfac-4e22-811b-a00d1b0a2902","Type":"ContainerStarted","Data":"249447ee5b04aab172de7844d26775bca336c987c67d369455fb745f68767979"} Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.662931 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-8jxpl" podStartSLOduration=129.662908358 podStartE2EDuration="2m9.662908358s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:28.658003108 +0000 UTC m=+151.661378652" watchObservedRunningTime="2025-12-02 20:14:28.662908358 +0000 UTC m=+151.666283882" Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.665386 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:28 crc kubenswrapper[4796]: E1202 20:14:28.666892 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:29.166877904 +0000 UTC m=+152.170253428 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.673861 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-slnp6" event={"ID":"8eadd752-dac7-4cc8-8e44-11f8b3f35c74","Type":"ContainerStarted","Data":"bcc89fb747d1aae30352b902965c9316dc9508a034638964032ef970f2717384"} Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.694459 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lsdcg" event={"ID":"bbd52c61-04b7-424a-88c0-71653fd8d65e","Type":"ContainerStarted","Data":"8bdac55ad65b8982500776fb4946ecb291ceaccc904212f0824a6b5c7984283a"} Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.699941 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-mfxpp" event={"ID":"fd1a5c31-4bd7-4634-994e-d3681eaae556","Type":"ContainerStarted","Data":"47124f081862069cb1b5013d2e25d527338088f76d6647612338568ed2e68dab"} Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.732731 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c6b87" event={"ID":"161618bd-f935-4923-95bb-70d4cfd6eb1f","Type":"ContainerStarted","Data":"1abc443513499ea94dee269cf8d86183f3872add24600cc515c7b97bcbddd11e"} Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.738766 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sxkvc" event={"ID":"ed72b389-0d73-447b-946b-374b82b9d3eb","Type":"ContainerStarted","Data":"a6a02e88c6bfcd27187f024dbcd24415cc5f49c7168d7c5f345b276fd00db4fe"} Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.747350 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fpclt" event={"ID":"4fc5c4c1-71e4-47c6-96f5-62e69c97cb63","Type":"ContainerStarted","Data":"ee13f7ced16a6a3e1297e2d743ed266d230f5914edb8f31af91705929c8c9795"} Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.755882 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-r5ddg" event={"ID":"518ae85f-81af-4700-bb54-f6bdae637b1c","Type":"ContainerStarted","Data":"792364e164e4c622955c35541062f5ecc9d8c8da66829aad709be1eb42a756b7"} Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.756751 
4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sxkvc" podStartSLOduration=129.756733809 podStartE2EDuration="2m9.756733809s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:28.756170365 +0000 UTC m=+151.759545899" watchObservedRunningTime="2025-12-02 20:14:28.756733809 +0000 UTC m=+151.760109343" Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.757011 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zhfv7" podStartSLOduration=129.757006346 podStartE2EDuration="2m9.757006346s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:28.681215781 +0000 UTC m=+151.684591315" watchObservedRunningTime="2025-12-02 20:14:28.757006346 +0000 UTC m=+151.760381880" Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.768059 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:28 crc kubenswrapper[4796]: E1202 20:14:28.768266 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:29.268232127 +0000 UTC m=+152.271607651 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.768458 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:28 crc kubenswrapper[4796]: E1202 20:14:28.769181 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:29.26917071 +0000 UTC m=+152.272546244 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.785406 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-cjf62" event={"ID":"74234e67-0f4f-4706-95a6-e04ac3ec94a5","Type":"ContainerStarted","Data":"9a69168009d03800b9d1e504b54e53c99bff06e1401ffb9b2e5b3e6dd451e685"} Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.818479 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4nq9r" event={"ID":"38aa3029-8a07-415a-9da2-9a0dca76fdb0","Type":"ContainerStarted","Data":"07fa46ccb2084c9e436302c051945f55e85434ddb3b639e9febb56b90b4c2115"} Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.827034 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8nh9j" event={"ID":"0a51e701-99f5-423c-a413-464a283751f4","Type":"ContainerStarted","Data":"9ecafb69e45a9072ec92df2f56ec20d6092c525532d382d4ec59a17442beba8b"} Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.831281 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qfgsc" event={"ID":"f8f831a3-1114-43e1-b10b-d5b31905aa9e","Type":"ContainerStarted","Data":"300f7eac4bde9831fe538c26c40f02222e33dd07ed702092adcbaed62b5a253c"} Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.831313 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qfgsc" event={"ID":"f8f831a3-1114-43e1-b10b-d5b31905aa9e","Type":"ContainerStarted","Data":"d319ef11619d9fd0e4d8dd79bdf8a7fbe2b40247be0bda4e25799051860fb1aa"} Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.832763 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.838981 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pmh58" Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.842690 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4nq9r" podStartSLOduration=129.842651099 podStartE2EDuration="2m9.842651099s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:28.837380831 +0000 UTC m=+151.840756365" watchObservedRunningTime="2025-12-02 20:14:28.842651099 +0000 UTC m=+151.846026623" Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.871976 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:28 crc kubenswrapper[4796]: E1202 20:14:28.874358 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:29.374330816 +0000 UTC m=+152.377706350 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.909619 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qfgsc" podStartSLOduration=129.9096039 podStartE2EDuration="2m9.9096039s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:28.907997581 +0000 UTC m=+151.911373125" watchObservedRunningTime="2025-12-02 20:14:28.9096039 +0000 UTC m=+151.912979424" Dec 02 20:14:28 crc kubenswrapper[4796]: I1202 20:14:28.974170 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:28 crc kubenswrapper[4796]: E1202 20:14:28.976063 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:29.476044418 +0000 UTC m=+152.479419952 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:29 crc kubenswrapper[4796]: I1202 20:14:29.080966 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:29 crc kubenswrapper[4796]: E1202 20:14:29.082195 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-02 20:14:29.582167937 +0000 UTC m=+152.585543471 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:29 crc kubenswrapper[4796]: I1202 20:14:29.082669 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:29 crc kubenswrapper[4796]: E1202 20:14:29.083160 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:29.583143811 +0000 UTC m=+152.586519345 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:29 crc kubenswrapper[4796]: I1202 20:14:29.130466 4796 patch_prober.go:28] interesting pod/router-default-5444994796-hn5qp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 20:14:29 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Dec 02 20:14:29 crc kubenswrapper[4796]: [+]process-running ok Dec 02 20:14:29 crc kubenswrapper[4796]: healthz check failed Dec 02 20:14:29 crc kubenswrapper[4796]: I1202 20:14:29.131193 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hn5qp" podUID="9775f308-f5d8-4bc7-bfc0-00f065833a55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 20:14:29 crc kubenswrapper[4796]: I1202 20:14:29.184284 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:29 crc kubenswrapper[4796]: E1202 20:14:29.184489 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:29.684458503 +0000 UTC m=+152.687834037 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:29 crc kubenswrapper[4796]: I1202 20:14:29.184676 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:29 crc kubenswrapper[4796]: E1202 20:14:29.184950 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:29.684937965 +0000 UTC m=+152.688313499 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:29 crc kubenswrapper[4796]: I1202 20:14:29.286307 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:29 crc kubenswrapper[4796]: E1202 20:14:29.286553 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:29.786509344 +0000 UTC m=+152.789884878 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:29 crc kubenswrapper[4796]: I1202 20:14:29.287187 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:29 crc kubenswrapper[4796]: E1202 20:14:29.287733 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:29.787711573 +0000 UTC m=+152.791087107 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:29 crc kubenswrapper[4796]: I1202 20:14:29.389575 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:29 crc kubenswrapper[4796]: E1202 20:14:29.390315 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:29.890300916 +0000 UTC m=+152.893676450 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:29 crc kubenswrapper[4796]: I1202 20:14:29.492299 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:29 crc kubenswrapper[4796]: E1202 20:14:29.496114 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:29.996096698 +0000 UTC m=+152.999472232 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:29 crc kubenswrapper[4796]: I1202 20:14:29.593894 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:29 crc kubenswrapper[4796]: E1202 20:14:29.594418 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:30.094397567 +0000 UTC m=+153.097773101 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:29 crc kubenswrapper[4796]: I1202 20:14:29.705695 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:29 crc kubenswrapper[4796]: E1202 20:14:29.707001 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:30.206967472 +0000 UTC m=+153.210343046 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:29 crc kubenswrapper[4796]: I1202 20:14:29.754690 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n27mr"] Dec 02 20:14:29 crc kubenswrapper[4796]: I1202 20:14:29.756168 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n27mr" Dec 02 20:14:29 crc kubenswrapper[4796]: I1202 20:14:29.760772 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 02 20:14:29 crc kubenswrapper[4796]: I1202 20:14:29.806941 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:29 crc kubenswrapper[4796]: E1202 20:14:29.807281 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:30.307228799 +0000 UTC m=+153.310604333 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:29 crc kubenswrapper[4796]: I1202 20:14:29.807456 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:29 crc kubenswrapper[4796]: E1202 20:14:29.807944 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:30.307937457 +0000 UTC m=+153.311312991 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:29 crc kubenswrapper[4796]: I1202 20:14:29.850056 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n27mr"] Dec 02 20:14:29 crc kubenswrapper[4796]: I1202 20:14:29.909425 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:29 crc kubenswrapper[4796]: I1202 20:14:29.909771 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbmx7\" (UniqueName: \"kubernetes.io/projected/5411f34e-b1b7-4e1a-9948-49eb9b59a5d8-kube-api-access-pbmx7\") pod \"community-operators-n27mr\" (UID: \"5411f34e-b1b7-4e1a-9948-49eb9b59a5d8\") " pod="openshift-marketplace/community-operators-n27mr" Dec 02 20:14:29 crc kubenswrapper[4796]: I1202 20:14:29.909803 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5411f34e-b1b7-4e1a-9948-49eb9b59a5d8-utilities\") pod \"community-operators-n27mr\" (UID: \"5411f34e-b1b7-4e1a-9948-49eb9b59a5d8\") " pod="openshift-marketplace/community-operators-n27mr" Dec 02 20:14:29 crc kubenswrapper[4796]: I1202 20:14:29.909840 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5411f34e-b1b7-4e1a-9948-49eb9b59a5d8-catalog-content\") pod \"community-operators-n27mr\" (UID: \"5411f34e-b1b7-4e1a-9948-49eb9b59a5d8\") " pod="openshift-marketplace/community-operators-n27mr" Dec 02 20:14:29 crc 
kubenswrapper[4796]: E1202 20:14:29.909964 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:30.409938246 +0000 UTC m=+153.413313780 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:29 crc kubenswrapper[4796]: I1202 20:14:29.912036 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8vbxj" event={"ID":"03e312db-cfac-4e22-811b-a00d1b0a2902","Type":"ContainerStarted","Data":"323e7e8d1ad8c283ce78300a6aa83ef20598ce210aeb527987eb329962c7dd68"} Dec 02 20:14:29 crc kubenswrapper[4796]: I1202 20:14:29.912071 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8vbxj" event={"ID":"03e312db-cfac-4e22-811b-a00d1b0a2902","Type":"ContainerStarted","Data":"e725afa929a7c7bfea40b51b6043bb2088a6b02101a2c9207b99559805f7f511"} Dec 02 20:14:29 crc kubenswrapper[4796]: I1202 20:14:29.933028 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5wq72"] Dec 02 20:14:29 crc kubenswrapper[4796]: I1202 20:14:29.934154 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5wq72" Dec 02 20:14:29 crc kubenswrapper[4796]: I1202 20:14:29.939941 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8vbxj" podStartSLOduration=130.939917102 podStartE2EDuration="2m10.939917102s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:29.938661471 +0000 UTC m=+152.942037005" watchObservedRunningTime="2025-12-02 20:14:29.939917102 +0000 UTC m=+152.943292636" Dec 02 20:14:29 crc kubenswrapper[4796]: I1202 20:14:29.942868 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 02 20:14:29 crc kubenswrapper[4796]: I1202 20:14:29.949785 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5wq72"] Dec 02 20:14:29 crc kubenswrapper[4796]: I1202 20:14:29.980831 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh2w9" event={"ID":"27bc5a48-7598-4ca1-a0ce-e08d8a694a13","Type":"ContainerStarted","Data":"38a9d40492b6794309e5ba1e9b559723ecf87cc057d4fa4cff1fe2300216e0fd"} Dec 02 20:14:29 crc kubenswrapper[4796]: I1202 20:14:29.987133 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8nh9j" event={"ID":"0a51e701-99f5-423c-a413-464a283751f4","Type":"ContainerStarted","Data":"785f840a9f462359a0e84183160b5eb7724828dd93b41968a4543c2815e71dbb"} Dec 02 20:14:29 crc kubenswrapper[4796]: I1202 20:14:29.987862 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-8nh9j" Dec 02 20:14:29 crc kubenswrapper[4796]: I1202 20:14:29.995501 4796 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-8nh9j container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Dec 02 20:14:29 crc kubenswrapper[4796]: I1202 20:14:29.995681 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-8nh9j" podUID="0a51e701-99f5-423c-a413-464a283751f4" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.006143 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-mfxpp" event={"ID":"fd1a5c31-4bd7-4634-994e-d3681eaae556","Type":"ContainerStarted","Data":"be9956add184f5eea22c2ca35eca2b544e726a20359c2516d832b233fba4bdb2"} Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.010620 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5411f34e-b1b7-4e1a-9948-49eb9b59a5d8-utilities\") pod \"community-operators-n27mr\" (UID: \"5411f34e-b1b7-4e1a-9948-49eb9b59a5d8\") " pod="openshift-marketplace/community-operators-n27mr" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.010657 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.010699 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5411f34e-b1b7-4e1a-9948-49eb9b59a5d8-catalog-content\") pod \"community-operators-n27mr\" (UID: \"5411f34e-b1b7-4e1a-9948-49eb9b59a5d8\") " pod="openshift-marketplace/community-operators-n27mr" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.010735 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6q4l\" (UniqueName: \"kubernetes.io/projected/22a52028-b443-4287-80e1-dfcffb2ba07e-kube-api-access-w6q4l\") pod \"certified-operators-5wq72\" (UID: \"22a52028-b443-4287-80e1-dfcffb2ba07e\") " pod="openshift-marketplace/certified-operators-5wq72" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.010775 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22a52028-b443-4287-80e1-dfcffb2ba07e-catalog-content\") pod \"certified-operators-5wq72\" (UID: \"22a52028-b443-4287-80e1-dfcffb2ba07e\") " pod="openshift-marketplace/certified-operators-5wq72" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.010810 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22a52028-b443-4287-80e1-dfcffb2ba07e-utilities\") pod \"certified-operators-5wq72\" (UID: \"22a52028-b443-4287-80e1-dfcffb2ba07e\") " pod="openshift-marketplace/certified-operators-5wq72" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.010860 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbmx7\" (UniqueName: \"kubernetes.io/projected/5411f34e-b1b7-4e1a-9948-49eb9b59a5d8-kube-api-access-pbmx7\") pod \"community-operators-n27mr\" (UID: \"5411f34e-b1b7-4e1a-9948-49eb9b59a5d8\") " pod="openshift-marketplace/community-operators-n27mr" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.011574 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5411f34e-b1b7-4e1a-9948-49eb9b59a5d8-utilities\") pod \"community-operators-n27mr\" (UID: \"5411f34e-b1b7-4e1a-9948-49eb9b59a5d8\") " pod="openshift-marketplace/community-operators-n27mr" Dec 02 20:14:30 crc kubenswrapper[4796]: E1202 20:14:30.011879 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:30.511864533 +0000 UTC m=+153.515240057 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.012114 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5411f34e-b1b7-4e1a-9948-49eb9b59a5d8-catalog-content\") pod \"community-operators-n27mr\" (UID: \"5411f34e-b1b7-4e1a-9948-49eb9b59a5d8\") " pod="openshift-marketplace/community-operators-n27mr" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.032751 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c6b87" event={"ID":"161618bd-f935-4923-95bb-70d4cfd6eb1f","Type":"ContainerStarted","Data":"2f643a48802bec4a1d7d2255392adab42ac6922d45324e62e7ddb01cec45b6c4"} Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.062699 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-r5ddg" event={"ID":"518ae85f-81af-4700-bb54-f6bdae637b1c","Type":"ContainerStarted","Data":"6e4d4fa7db36f7b918c4964f6d2f7f8ad345d72d7cf2068b0ed840def9097940"} Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.071042 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbmx7\" (UniqueName: \"kubernetes.io/projected/5411f34e-b1b7-4e1a-9948-49eb9b59a5d8-kube-api-access-pbmx7\") pod \"community-operators-n27mr\" (UID: \"5411f34e-b1b7-4e1a-9948-49eb9b59a5d8\") " pod="openshift-marketplace/community-operators-n27mr" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.072953 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-8jxpl" event={"ID":"e716e73a-1881-44ca-8af0-fb9defd9645b","Type":"ContainerStarted","Data":"19dcfa0501ea774361933144a4b2166e935a7dfea641d597474510572e85650d"} Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.077474 4796 patch_prober.go:28] interesting pod/downloads-7954f5f757-8jxpl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.077541 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8jxpl" podUID="e716e73a-1881-44ca-8af0-fb9defd9645b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.082596 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8qqvs" event={"ID":"3c7bea3f-e468-4655-b241-ae19d336c6c0","Type":"ContainerStarted","Data":"e7cd5f101cd67953393b0c67d99b4a71d7583e45f1eaab8e4a3bbfef005afb7f"} Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.086650 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-sfw6z" 
event={"ID":"b43a5c93-cc27-4971-8cbc-453a421990c6","Type":"ContainerStarted","Data":"6be6fff12fd58eb572619387ac7ec7c9dda35aeca517541e80873a9f303a889b"} Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.090694 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n27mr" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.097501 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bfwqk" event={"ID":"b273d88f-06a4-4c2d-9a6d-339954a14abd","Type":"ContainerStarted","Data":"1338a073220716a51803addac174d17b10cee144fc084a941dad03f9a273a3f7"} Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.097548 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bfwqk" event={"ID":"b273d88f-06a4-4c2d-9a6d-339954a14abd","Type":"ContainerStarted","Data":"07a6336591b63240eef112ddbe51ede8d4e90c7df31e30a7a5a11aa846c487e9"} Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.106644 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh2w9" podStartSLOduration=131.106616727 podStartE2EDuration="2m11.106616727s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:30.105955501 +0000 UTC m=+153.109331065" watchObservedRunningTime="2025-12-02 20:14:30.106616727 +0000 UTC m=+153.109992261" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.106897 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-8nh9j" podStartSLOduration=131.106890704 podStartE2EDuration="2m11.106890704s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:30.067144692 +0000 UTC m=+153.070520226" watchObservedRunningTime="2025-12-02 20:14:30.106890704 +0000 UTC m=+153.110266238" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.113102 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:30 crc kubenswrapper[4796]: E1202 20:14:30.113482 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:30.613453053 +0000 UTC m=+153.616828587 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.113688 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6q4l\" (UniqueName: \"kubernetes.io/projected/22a52028-b443-4287-80e1-dfcffb2ba07e-kube-api-access-w6q4l\") pod \"certified-operators-5wq72\" (UID: \"22a52028-b443-4287-80e1-dfcffb2ba07e\") " pod="openshift-marketplace/certified-operators-5wq72" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.113959 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22a52028-b443-4287-80e1-dfcffb2ba07e-catalog-content\") pod \"certified-operators-5wq72\" (UID: \"22a52028-b443-4287-80e1-dfcffb2ba07e\") " pod="openshift-marketplace/certified-operators-5wq72" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.114010 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22a52028-b443-4287-80e1-dfcffb2ba07e-utilities\") pod \"certified-operators-5wq72\" (UID: \"22a52028-b443-4287-80e1-dfcffb2ba07e\") " pod="openshift-marketplace/certified-operators-5wq72" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.116106 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22a52028-b443-4287-80e1-dfcffb2ba07e-catalog-content\") pod \"certified-operators-5wq72\" (UID: \"22a52028-b443-4287-80e1-dfcffb2ba07e\") " pod="openshift-marketplace/certified-operators-5wq72" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.116199 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22a52028-b443-4287-80e1-dfcffb2ba07e-utilities\") pod \"certified-operators-5wq72\" (UID: \"22a52028-b443-4287-80e1-dfcffb2ba07e\") " pod="openshift-marketplace/certified-operators-5wq72" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.130667 4796 patch_prober.go:28] interesting pod/router-default-5444994796-hn5qp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 20:14:30 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Dec 02 20:14:30 crc kubenswrapper[4796]: [+]process-running ok Dec 02 20:14:30 crc kubenswrapper[4796]: healthz check failed Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.130732 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hn5qp" podUID="9775f308-f5d8-4bc7-bfc0-00f065833a55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.151143 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jfb6p" 
event={"ID":"21dab2f8-0cdd-4cd9-b3ed-aa0d4bce10e8","Type":"ContainerStarted","Data":"01ebc49eee8d3c874766c9bc4f324fb2d0e888b4b9e87e23d59be2dc103d2d5d"} Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.151547 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-65fxg"] Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.152508 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-65fxg" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.154274 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rlp96" event={"ID":"99b7235a-d9b2-489c-b437-1aac26d04cc1","Type":"ContainerStarted","Data":"0300926c0e197b59e806422df795b8d10bb7ac8819f5316a09645e35f34c7a57"} Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.155146 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rlp96" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.166969 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bfwqk" podStartSLOduration=131.166944478 podStartE2EDuration="2m11.166944478s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:30.15218474 +0000 UTC m=+153.155560274" watchObservedRunningTime="2025-12-02 20:14:30.166944478 +0000 UTC m=+153.170320002" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.178958 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6q4l\" (UniqueName: \"kubernetes.io/projected/22a52028-b443-4287-80e1-dfcffb2ba07e-kube-api-access-w6q4l\") pod \"certified-operators-5wq72\" (UID: \"22a52028-b443-4287-80e1-dfcffb2ba07e\") " pod="openshift-marketplace/certified-operators-5wq72" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.179624 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-65fxg"] Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.188927 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-54d99" event={"ID":"1bd7655e-5c79-4963-bb2a-c8b55f131a39","Type":"ContainerStarted","Data":"380d95efbd18ba45c5f13b84941c4d938f032c1e322765c41b969e661c121f6a"} Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.196616 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-ztcsx" event={"ID":"5e268554-349b-41e2-a9ec-64b6de541ace","Type":"ContainerStarted","Data":"b7375ba6289ddce8cc1f2e1cf14523e32c26fd80390465afe8d3ac4dd28abf77"} Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.198895 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-ztcsx" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.198976 4796 patch_prober.go:28] interesting pod/console-operator-58897d9998-ztcsx container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.199018 4796 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-ztcsx" podUID="5e268554-349b-41e2-a9ec-64b6de541ace" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.220575 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdk6p\" (UniqueName: \"kubernetes.io/projected/88577409-7021-4e24-852b-4d8f6d0c512a-kube-api-access-rdk6p\") pod \"community-operators-65fxg\" (UID: \"88577409-7021-4e24-852b-4d8f6d0c512a\") " pod="openshift-marketplace/community-operators-65fxg" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.220664 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88577409-7021-4e24-852b-4d8f6d0c512a-catalog-content\") pod \"community-operators-65fxg\" (UID: \"88577409-7021-4e24-852b-4d8f6d0c512a\") " pod="openshift-marketplace/community-operators-65fxg" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.220715 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88577409-7021-4e24-852b-4d8f6d0c512a-utilities\") pod \"community-operators-65fxg\" (UID: \"88577409-7021-4e24-852b-4d8f6d0c512a\") " pod="openshift-marketplace/community-operators-65fxg" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.220753 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:30 crc kubenswrapper[4796]: E1202 20:14:30.230347 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:30.730324062 +0000 UTC m=+153.733699596 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.238192 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fpclt" event={"ID":"4fc5c4c1-71e4-47c6-96f5-62e69c97cb63","Type":"ContainerStarted","Data":"d9864e7183d61df238c660efb270ef9ab143788182601a366640845f2fb8b702"} Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.254028 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-sfw6z" podStartSLOduration=131.254007035 podStartE2EDuration="2m11.254007035s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:30.253039891 +0000 UTC m=+153.256415425" watchObservedRunningTime="2025-12-02 20:14:30.254007035 +0000 UTC m=+153.257382569" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.254484 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c6b87" podStartSLOduration=131.254477876 podStartE2EDuration="2m11.254477876s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:30.203430131 +0000 UTC m=+153.206805685" watchObservedRunningTime="2025-12-02 20:14:30.254477876 +0000 UTC m=+153.257853410" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.293152 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" event={"ID":"e01c8d0d-e590-4e8f-96f0-85b5d7f29275","Type":"ContainerStarted","Data":"c880f68a8dceaa6f212c9d689e06602518628c808894105423b7a36f91c15a2d"} Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.295407 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-mfxpp" podStartSLOduration=9.295394957 podStartE2EDuration="9.295394957s" podCreationTimestamp="2025-12-02 20:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:30.293240655 +0000 UTC m=+153.296616189" watchObservedRunningTime="2025-12-02 20:14:30.295394957 +0000 UTC m=+153.298770491" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.303978 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411760-zzp4z" event={"ID":"bfed3adf-1892-4da3-8ffc-f4033036a4ca","Type":"ContainerStarted","Data":"315836e3c4f18ebca59f4b0bc725aa52dbf8449c8292a10fac6b44dc47598897"} Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.305781 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sxkvc" 
event={"ID":"ed72b389-0d73-447b-946b-374b82b9d3eb","Type":"ContainerStarted","Data":"01375dacb0d71d99cf331e7c8ead9ad76bcd1a00468ed24e8e8e6961cbe3ca4f"} Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.307205 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lsdcg" event={"ID":"bbd52c61-04b7-424a-88c0-71653fd8d65e","Type":"ContainerStarted","Data":"d6919fb9bb8a86dce7c2d99588e036e2cf5312f25b24ae8535d5a67bbe422f4f"} Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.307956 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lsdcg" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.311423 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-cjf62" event={"ID":"74234e67-0f4f-4706-95a6-e04ac3ec94a5","Type":"ContainerStarted","Data":"bf8c224b60833483f608b40b42dd65e8b13c63901770a88ca764b715003a0583"} Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.314588 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5wq72" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.325989 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.326311 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdk6p\" (UniqueName: \"kubernetes.io/projected/88577409-7021-4e24-852b-4d8f6d0c512a-kube-api-access-rdk6p\") pod \"community-operators-65fxg\" (UID: \"88577409-7021-4e24-852b-4d8f6d0c512a\") " pod="openshift-marketplace/community-operators-65fxg" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.326352 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88577409-7021-4e24-852b-4d8f6d0c512a-catalog-content\") pod \"community-operators-65fxg\" (UID: \"88577409-7021-4e24-852b-4d8f6d0c512a\") " pod="openshift-marketplace/community-operators-65fxg" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.326373 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88577409-7021-4e24-852b-4d8f6d0c512a-utilities\") pod \"community-operators-65fxg\" (UID: \"88577409-7021-4e24-852b-4d8f6d0c512a\") " pod="openshift-marketplace/community-operators-65fxg" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.330082 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lsdcg" Dec 02 20:14:30 crc kubenswrapper[4796]: E1202 20:14:30.330594 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:30.830569499 +0000 UTC m=+153.833945033 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.332194 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88577409-7021-4e24-852b-4d8f6d0c512a-catalog-content\") pod \"community-operators-65fxg\" (UID: \"88577409-7021-4e24-852b-4d8f6d0c512a\") " pod="openshift-marketplace/community-operators-65fxg" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.332489 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88577409-7021-4e24-852b-4d8f6d0c512a-utilities\") pod \"community-operators-65fxg\" (UID: \"88577409-7021-4e24-852b-4d8f6d0c512a\") " pod="openshift-marketplace/community-operators-65fxg" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.343244 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8qqvs" podStartSLOduration=131.343224445 podStartE2EDuration="2m11.343224445s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:30.325329732 +0000 UTC m=+153.328705266" watchObservedRunningTime="2025-12-02 20:14:30.343224445 +0000 UTC m=+153.346599979" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.345798 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kgxxm"] Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.346972 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kgxxm" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.370413 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kgxxm"] Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.380430 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdk6p\" (UniqueName: \"kubernetes.io/projected/88577409-7021-4e24-852b-4d8f6d0c512a-kube-api-access-rdk6p\") pod \"community-operators-65fxg\" (UID: \"88577409-7021-4e24-852b-4d8f6d0c512a\") " pod="openshift-marketplace/community-operators-65fxg" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.380559 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqd2f" event={"ID":"8849ec8b-1b79-4779-ae4c-56143ff6a938","Type":"ContainerStarted","Data":"7da75475f725d0dbcf6a294bced240d14ad92705336999350dabd8cec349d341"} Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.380622 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqd2f" event={"ID":"8849ec8b-1b79-4779-ae4c-56143ff6a938","Type":"ContainerStarted","Data":"d0f8ee298a181129ecdb257145b0da8a5fa07c912dba96bcd8c5a1893cddc3f2"} Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.381273 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqd2f" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.424302 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lsdcg" podStartSLOduration=131.424274477 podStartE2EDuration="2m11.424274477s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:30.391783601 +0000 UTC m=+153.395159135" watchObservedRunningTime="2025-12-02 20:14:30.424274477 +0000 UTC m=+153.427650011" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.425650 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-p6qqx" event={"ID":"f24621ef-6ee6-4d48-bcdb-ca5f8bb7731a","Type":"ContainerStarted","Data":"92e24c4253e95c9a6d2d3e261939365631ea2b5f30cd894dbaadcff4c21c42a8"} Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.430209 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.430304 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cdlt\" (UniqueName: \"kubernetes.io/projected/839017d2-1f6e-4b5b-93ec-80175eabe5f8-kube-api-access-5cdlt\") pod \"certified-operators-kgxxm\" (UID: \"839017d2-1f6e-4b5b-93ec-80175eabe5f8\") " pod="openshift-marketplace/certified-operators-kgxxm" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.430400 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/839017d2-1f6e-4b5b-93ec-80175eabe5f8-utilities\") pod \"certified-operators-kgxxm\" (UID: \"839017d2-1f6e-4b5b-93ec-80175eabe5f8\") " pod="openshift-marketplace/certified-operators-kgxxm" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.430424 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/839017d2-1f6e-4b5b-93ec-80175eabe5f8-catalog-content\") pod \"certified-operators-kgxxm\" (UID: \"839017d2-1f6e-4b5b-93ec-80175eabe5f8\") " pod="openshift-marketplace/certified-operators-kgxxm" Dec 02 20:14:30 crc kubenswrapper[4796]: E1202 20:14:30.433093 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:30.93307948 +0000 UTC m=+153.936455014 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.466291 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fpclt" podStartSLOduration=131.466269614 podStartE2EDuration="2m11.466269614s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:30.424981154 +0000 UTC m=+153.428356678" watchObservedRunningTime="2025-12-02 20:14:30.466269614 +0000 UTC m=+153.469645148" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.467692 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-js54s" event={"ID":"8e53767d-5052-4220-9645-b8d6d433a7df","Type":"ContainerStarted","Data":"f22af3f9e8d83b6931c68215c883662d6e7a1d423a1c80e767979c1bec3b735d"} Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.468758 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rlp96" podStartSLOduration=131.468751193 podStartE2EDuration="2m11.468751193s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:30.466902469 +0000 UTC m=+153.470278003" watchObservedRunningTime="2025-12-02 20:14:30.468751193 +0000 UTC m=+153.472126717" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.495346 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-65fxg" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.501151 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-54d99" podStartSLOduration=131.501115618 podStartE2EDuration="2m11.501115618s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:30.499069037 +0000 UTC m=+153.502444571" watchObservedRunningTime="2025-12-02 20:14:30.501115618 +0000 UTC m=+153.504491152" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.523368 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-slnp6" event={"ID":"8eadd752-dac7-4cc8-8e44-11f8b3f35c74","Type":"ContainerStarted","Data":"426cdb40d82b512a1f859ae60ddbd1a88df72b346b6a0e1d05ea792a66a1ae39"} Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.523507 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-slnp6" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.532696 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.532924 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/839017d2-1f6e-4b5b-93ec-80175eabe5f8-utilities\") pod \"certified-operators-kgxxm\" (UID: \"839017d2-1f6e-4b5b-93ec-80175eabe5f8\") " pod="openshift-marketplace/certified-operators-kgxxm" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.532953 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/839017d2-1f6e-4b5b-93ec-80175eabe5f8-catalog-content\") pod \"certified-operators-kgxxm\" (UID: \"839017d2-1f6e-4b5b-93ec-80175eabe5f8\") " pod="openshift-marketplace/certified-operators-kgxxm" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.533344 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cdlt\" (UniqueName: \"kubernetes.io/projected/839017d2-1f6e-4b5b-93ec-80175eabe5f8-kube-api-access-5cdlt\") pod \"certified-operators-kgxxm\" (UID: \"839017d2-1f6e-4b5b-93ec-80175eabe5f8\") " pod="openshift-marketplace/certified-operators-kgxxm" Dec 02 20:14:30 crc kubenswrapper[4796]: E1202 20:14:30.533895 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:31.03387828 +0000 UTC m=+154.037253814 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.534572 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/839017d2-1f6e-4b5b-93ec-80175eabe5f8-utilities\") pod \"certified-operators-kgxxm\" (UID: \"839017d2-1f6e-4b5b-93ec-80175eabe5f8\") " pod="openshift-marketplace/certified-operators-kgxxm" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.534989 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/839017d2-1f6e-4b5b-93ec-80175eabe5f8-catalog-content\") pod \"certified-operators-kgxxm\" (UID: \"839017d2-1f6e-4b5b-93ec-80175eabe5f8\") " pod="openshift-marketplace/certified-operators-kgxxm" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.535688 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zhfv7" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.562440 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cdlt\" (UniqueName: \"kubernetes.io/projected/839017d2-1f6e-4b5b-93ec-80175eabe5f8-kube-api-access-5cdlt\") pod \"certified-operators-kgxxm\" (UID: \"839017d2-1f6e-4b5b-93ec-80175eabe5f8\") " pod="openshift-marketplace/certified-operators-kgxxm" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.580338 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-cjf62" podStartSLOduration=131.580316704 podStartE2EDuration="2m11.580316704s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:30.545624924 +0000 UTC m=+153.549000458" watchObservedRunningTime="2025-12-02 20:14:30.580316704 +0000 UTC m=+153.583692238" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.581226 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-ztcsx" podStartSLOduration=131.581221417 podStartE2EDuration="2m11.581221417s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:30.579639488 +0000 UTC m=+153.583015012" watchObservedRunningTime="2025-12-02 20:14:30.581221417 +0000 UTC m=+153.584596951" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.611142 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29411760-zzp4z" podStartSLOduration=131.61112448 podStartE2EDuration="2m11.61112448s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:30.608993319 +0000 UTC m=+153.612368853" 
watchObservedRunningTime="2025-12-02 20:14:30.61112448 +0000 UTC m=+153.614500014" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.636655 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:30 crc kubenswrapper[4796]: E1202 20:14:30.643904 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:31.143872983 +0000 UTC m=+154.147248517 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.688473 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqd2f" podStartSLOduration=131.688455732 podStartE2EDuration="2m11.688455732s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:30.685586373 +0000 UTC m=+153.688961907" watchObservedRunningTime="2025-12-02 20:14:30.688455732 +0000 UTC m=+153.691831266" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.726688 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kgxxm" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.734491 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-p6qqx" podStartSLOduration=131.734470516 podStartE2EDuration="2m11.734470516s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:30.73175144 +0000 UTC m=+153.735126974" watchObservedRunningTime="2025-12-02 20:14:30.734470516 +0000 UTC m=+153.737846040" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.738837 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:30 crc kubenswrapper[4796]: E1202 20:14:30.739202 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:31.239185561 +0000 UTC m=+154.242561095 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.792690 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-js54s" podStartSLOduration=131.792658015 podStartE2EDuration="2m11.792658015s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:30.791706082 +0000 UTC m=+153.795081616" watchObservedRunningTime="2025-12-02 20:14:30.792658015 +0000 UTC m=+153.796033549" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.842191 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:30 crc kubenswrapper[4796]: E1202 20:14:30.842600 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:31.342587423 +0000 UTC m=+154.345962957 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.882359 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-slnp6" podStartSLOduration=9.882341676 podStartE2EDuration="9.882341676s" podCreationTimestamp="2025-12-02 20:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:30.881331801 +0000 UTC m=+153.884707335" watchObservedRunningTime="2025-12-02 20:14:30.882341676 +0000 UTC m=+153.885717210" Dec 02 20:14:30 crc kubenswrapper[4796]: I1202 20:14:30.944737 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:30 crc kubenswrapper[4796]: E1202 20:14:30.945102 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-02 20:14:31.445082505 +0000 UTC m=+154.448458039 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:31 crc kubenswrapper[4796]: I1202 20:14:31.046555 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:31 crc kubenswrapper[4796]: E1202 20:14:31.046965 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:31.54694303 +0000 UTC m=+154.550318564 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:31 crc kubenswrapper[4796]: I1202 20:14:31.124096 4796 patch_prober.go:28] interesting pod/router-default-5444994796-hn5qp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 20:14:31 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Dec 02 20:14:31 crc kubenswrapper[4796]: [+]process-running ok Dec 02 20:14:31 crc kubenswrapper[4796]: healthz check failed Dec 02 20:14:31 crc kubenswrapper[4796]: I1202 20:14:31.124176 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hn5qp" podUID="9775f308-f5d8-4bc7-bfc0-00f065833a55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 20:14:31 crc kubenswrapper[4796]: I1202 20:14:31.159700 4796 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-rlp96 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 02 20:14:31 crc kubenswrapper[4796]: I1202 20:14:31.159792 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rlp96" podUID="99b7235a-d9b2-489c-b437-1aac26d04cc1" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.26:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 02 20:14:31 crc 
kubenswrapper[4796]: I1202 20:14:31.160359 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:31 crc kubenswrapper[4796]: E1202 20:14:31.160810 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:31.660761596 +0000 UTC m=+154.664137140 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:31 crc kubenswrapper[4796]: I1202 20:14:31.160925 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:31 crc kubenswrapper[4796]: E1202 20:14:31.161354 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:31.66134083 +0000 UTC m=+154.664716364 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:31 crc kubenswrapper[4796]: I1202 20:14:31.262771 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:31 crc kubenswrapper[4796]: E1202 20:14:31.263204 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:31.763189196 +0000 UTC m=+154.766564720 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:31 crc kubenswrapper[4796]: I1202 20:14:31.364098 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:31 crc kubenswrapper[4796]: E1202 20:14:31.364423 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:31.864409316 +0000 UTC m=+154.867784840 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:31 crc kubenswrapper[4796]: I1202 20:14:31.377037 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n27mr"] Dec 02 20:14:31 crc kubenswrapper[4796]: I1202 20:14:31.467055 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:31 crc kubenswrapper[4796]: E1202 20:14:31.467495 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:31.967457421 +0000 UTC m=+154.970832945 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:31 crc kubenswrapper[4796]: I1202 20:14:31.467733 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:31 crc kubenswrapper[4796]: E1202 20:14:31.468217 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:31.968201428 +0000 UTC m=+154.971576962 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:31 crc kubenswrapper[4796]: I1202 20:14:31.495460 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5wq72"] Dec 02 20:14:31 crc kubenswrapper[4796]: I1202 20:14:31.568882 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:31 crc kubenswrapper[4796]: E1202 20:14:31.569365 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:32.069349118 +0000 UTC m=+155.072724652 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:31 crc kubenswrapper[4796]: I1202 20:14:31.570464 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-r5ddg" event={"ID":"518ae85f-81af-4700-bb54-f6bdae637b1c","Type":"ContainerStarted","Data":"61cc032969aebadceb1932a421a41b2ec1f9b7fb31630ad36caad5abb042781d"} Dec 02 20:14:31 crc kubenswrapper[4796]: I1202 20:14:31.588483 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n27mr" event={"ID":"5411f34e-b1b7-4e1a-9948-49eb9b59a5d8","Type":"ContainerStarted","Data":"389224d57d2d585e685f8f18f8ba28df8bffb91e86cd2602d8f11fd763ba4937"} Dec 02 20:14:31 crc kubenswrapper[4796]: I1202 20:14:31.605450 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fpclt" event={"ID":"4fc5c4c1-71e4-47c6-96f5-62e69c97cb63","Type":"ContainerStarted","Data":"66446a6853e812ff95e74e2ec14a8071f50f5e6c6eb4de379c7e8db1d57bcf43"} Dec 02 20:14:31 crc kubenswrapper[4796]: I1202 20:14:31.637602 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-slnp6" event={"ID":"8eadd752-dac7-4cc8-8e44-11f8b3f35c74","Type":"ContainerStarted","Data":"4514e3acc31268955f5ad3740751607f3540918854640150c66450a2c12db8c2"} Dec 02 20:14:31 crc kubenswrapper[4796]: I1202 20:14:31.667225 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tjstr" event={"ID":"298ad988-5006-4eb8-8f3b-72bc50af51a6","Type":"ContainerStarted","Data":"d286095c529df59daf3b482b3b500e0f77436654a76e2384076589d1912f3c01"} Dec 02 20:14:31 crc kubenswrapper[4796]: I1202 20:14:31.671446 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:31 crc kubenswrapper[4796]: E1202 20:14:31.676532 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:32.176516461 +0000 UTC m=+155.179891995 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:31 crc kubenswrapper[4796]: I1202 20:14:31.697063 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-r5ddg" podStartSLOduration=132.697042389 podStartE2EDuration="2m12.697042389s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:31.656107448 +0000 UTC m=+154.659482982" watchObservedRunningTime="2025-12-02 20:14:31.697042389 +0000 UTC m=+154.700417923" Dec 02 20:14:31 crc kubenswrapper[4796]: I1202 20:14:31.697679 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wq72" event={"ID":"22a52028-b443-4287-80e1-dfcffb2ba07e","Type":"ContainerStarted","Data":"378d7b0e2b6a9091601b851c19e7e22f0c9ab36751c60ebdacfb60dd764b112f"} Dec 02 20:14:31 crc kubenswrapper[4796]: I1202 20:14:31.700077 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-65fxg"] Dec 02 20:14:31 crc kubenswrapper[4796]: I1202 20:14:31.726924 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" event={"ID":"e01c8d0d-e590-4e8f-96f0-85b5d7f29275","Type":"ContainerStarted","Data":"48a9f93fedcf1f49f9a6d0bda18ab4f62c4516a0b0382fa12467a46c2f282108"} Dec 02 20:14:31 crc kubenswrapper[4796]: I1202 20:14:31.729386 4796 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-8nh9j container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Dec 02 20:14:31 crc kubenswrapper[4796]: I1202 20:14:31.729437 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-8nh9j" podUID="0a51e701-99f5-423c-a413-464a283751f4" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" Dec 02 20:14:31 crc kubenswrapper[4796]: I1202 20:14:31.730221 4796 patch_prober.go:28] interesting pod/downloads-7954f5f757-8jxpl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Dec 02 20:14:31 crc kubenswrapper[4796]: I1202 20:14:31.730320 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8jxpl" podUID="e716e73a-1881-44ca-8af0-fb9defd9645b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Dec 02 20:14:31 crc kubenswrapper[4796]: I1202 20:14:31.739388 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jfb6p" Dec 02 20:14:31 crc 
kubenswrapper[4796]: I1202 20:14:31.740631 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rlp96" Dec 02 20:14:31 crc kubenswrapper[4796]: I1202 20:14:31.772936 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" podStartSLOduration=132.772909015 podStartE2EDuration="2m12.772909015s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:31.767426092 +0000 UTC m=+154.770801626" watchObservedRunningTime="2025-12-02 20:14:31.772909015 +0000 UTC m=+154.776284539" Dec 02 20:14:31 crc kubenswrapper[4796]: I1202 20:14:31.774226 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:31 crc kubenswrapper[4796]: I1202 20:14:31.775074 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-ztcsx" Dec 02 20:14:31 crc kubenswrapper[4796]: E1202 20:14:31.779633 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:32.279608657 +0000 UTC m=+155.282984191 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:31 crc kubenswrapper[4796]: I1202 20:14:31.780961 4796 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 02 20:14:31 crc kubenswrapper[4796]: I1202 20:14:31.871931 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kgxxm"] Dec 02 20:14:31 crc kubenswrapper[4796]: I1202 20:14:31.882689 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:31 crc kubenswrapper[4796]: E1202 20:14:31.907786 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:32.407769459 +0000 UTC m=+155.411144983 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:31 crc kubenswrapper[4796]: I1202 20:14:31.973869 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-26bqf"] Dec 02 20:14:31 crc kubenswrapper[4796]: I1202 20:14:31.975776 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-26bqf" Dec 02 20:14:31 crc kubenswrapper[4796]: I1202 20:14:31.985576 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.001538 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.001662 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-26bqf"] Dec 02 20:14:32 crc kubenswrapper[4796]: E1202 20:14:32.002075 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:32.502059593 +0000 UTC m=+155.505435127 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.110353 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f262dee-3028-4aa8-8ab3-8e4777368da0-utilities\") pod \"redhat-marketplace-26bqf\" (UID: \"2f262dee-3028-4aa8-8ab3-8e4777368da0\") " pod="openshift-marketplace/redhat-marketplace-26bqf" Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.110421 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.110453 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f262dee-3028-4aa8-8ab3-8e4777368da0-catalog-content\") pod \"redhat-marketplace-26bqf\" (UID: \"2f262dee-3028-4aa8-8ab3-8e4777368da0\") " pod="openshift-marketplace/redhat-marketplace-26bqf" Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.110471 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc62z\" (UniqueName: \"kubernetes.io/projected/2f262dee-3028-4aa8-8ab3-8e4777368da0-kube-api-access-cc62z\") pod \"redhat-marketplace-26bqf\" (UID: \"2f262dee-3028-4aa8-8ab3-8e4777368da0\") " pod="openshift-marketplace/redhat-marketplace-26bqf" Dec 02 20:14:32 crc kubenswrapper[4796]: E1202 20:14:32.110900 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:32.610882976 +0000 UTC m=+155.614258510 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.122950 4796 patch_prober.go:28] interesting pod/router-default-5444994796-hn5qp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 20:14:32 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Dec 02 20:14:32 crc kubenswrapper[4796]: [+]process-running ok Dec 02 20:14:32 crc kubenswrapper[4796]: healthz check failed Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.123037 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hn5qp" podUID="9775f308-f5d8-4bc7-bfc0-00f065833a55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.211495 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.212066 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f262dee-3028-4aa8-8ab3-8e4777368da0-catalog-content\") pod \"redhat-marketplace-26bqf\" (UID: \"2f262dee-3028-4aa8-8ab3-8e4777368da0\") " pod="openshift-marketplace/redhat-marketplace-26bqf" Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.212094 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc62z\" (UniqueName: \"kubernetes.io/projected/2f262dee-3028-4aa8-8ab3-8e4777368da0-kube-api-access-cc62z\") pod \"redhat-marketplace-26bqf\" (UID: \"2f262dee-3028-4aa8-8ab3-8e4777368da0\") " pod="openshift-marketplace/redhat-marketplace-26bqf" Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.212182 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f262dee-3028-4aa8-8ab3-8e4777368da0-utilities\") pod \"redhat-marketplace-26bqf\" (UID: \"2f262dee-3028-4aa8-8ab3-8e4777368da0\") " pod="openshift-marketplace/redhat-marketplace-26bqf" Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.213087 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f262dee-3028-4aa8-8ab3-8e4777368da0-utilities\") pod \"redhat-marketplace-26bqf\" (UID: \"2f262dee-3028-4aa8-8ab3-8e4777368da0\") " pod="openshift-marketplace/redhat-marketplace-26bqf" Dec 02 20:14:32 crc kubenswrapper[4796]: E1202 20:14:32.213170 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-02 20:14:32.713154192 +0000 UTC m=+155.716529726 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.213413 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f262dee-3028-4aa8-8ab3-8e4777368da0-catalog-content\") pod \"redhat-marketplace-26bqf\" (UID: \"2f262dee-3028-4aa8-8ab3-8e4777368da0\") " pod="openshift-marketplace/redhat-marketplace-26bqf" Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.251085 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc62z\" (UniqueName: \"kubernetes.io/projected/2f262dee-3028-4aa8-8ab3-8e4777368da0-kube-api-access-cc62z\") pod \"redhat-marketplace-26bqf\" (UID: \"2f262dee-3028-4aa8-8ab3-8e4777368da0\") " pod="openshift-marketplace/redhat-marketplace-26bqf" Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.313692 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:32 crc kubenswrapper[4796]: E1202 20:14:32.314204 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:32.814181039 +0000 UTC m=+155.817556743 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.332415 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pgfr4"] Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.333620 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pgfr4" Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.334311 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-26bqf" Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.348755 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pgfr4"] Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.414500 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:32 crc kubenswrapper[4796]: E1202 20:14:32.414582 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 20:14:32.914563998 +0000 UTC m=+155.917939532 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.415104 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b44fde09-6008-44d4-999b-3ddf1c198ff7-utilities\") pod \"redhat-marketplace-pgfr4\" (UID: \"b44fde09-6008-44d4-999b-3ddf1c198ff7\") " pod="openshift-marketplace/redhat-marketplace-pgfr4" Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.415180 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b44fde09-6008-44d4-999b-3ddf1c198ff7-catalog-content\") pod \"redhat-marketplace-pgfr4\" (UID: \"b44fde09-6008-44d4-999b-3ddf1c198ff7\") " pod="openshift-marketplace/redhat-marketplace-pgfr4" Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.415272 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.415333 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tmr5\" (UniqueName: \"kubernetes.io/projected/b44fde09-6008-44d4-999b-3ddf1c198ff7-kube-api-access-4tmr5\") pod \"redhat-marketplace-pgfr4\" (UID: \"b44fde09-6008-44d4-999b-3ddf1c198ff7\") " pod="openshift-marketplace/redhat-marketplace-pgfr4" Dec 02 20:14:32 crc kubenswrapper[4796]: E1202 20:14:32.416468 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 20:14:32.916458895 +0000 UTC m=+155.919834429 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rgp7h" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.429625 4796 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-02T20:14:31.780980861Z","Handler":null,"Name":""} Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.467099 4796 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.467136 4796 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.517131 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.517460 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tmr5\" (UniqueName: \"kubernetes.io/projected/b44fde09-6008-44d4-999b-3ddf1c198ff7-kube-api-access-4tmr5\") pod \"redhat-marketplace-pgfr4\" (UID: \"b44fde09-6008-44d4-999b-3ddf1c198ff7\") " pod="openshift-marketplace/redhat-marketplace-pgfr4" Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.517504 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b44fde09-6008-44d4-999b-3ddf1c198ff7-utilities\") pod \"redhat-marketplace-pgfr4\" (UID: \"b44fde09-6008-44d4-999b-3ddf1c198ff7\") " pod="openshift-marketplace/redhat-marketplace-pgfr4" Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.517589 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b44fde09-6008-44d4-999b-3ddf1c198ff7-catalog-content\") pod \"redhat-marketplace-pgfr4\" (UID: \"b44fde09-6008-44d4-999b-3ddf1c198ff7\") " pod="openshift-marketplace/redhat-marketplace-pgfr4" Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.518110 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b44fde09-6008-44d4-999b-3ddf1c198ff7-catalog-content\") pod \"redhat-marketplace-pgfr4\" (UID: \"b44fde09-6008-44d4-999b-3ddf1c198ff7\") " pod="openshift-marketplace/redhat-marketplace-pgfr4" Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.518420 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b44fde09-6008-44d4-999b-3ddf1c198ff7-utilities\") pod \"redhat-marketplace-pgfr4\" (UID: \"b44fde09-6008-44d4-999b-3ddf1c198ff7\") " 
pod="openshift-marketplace/redhat-marketplace-pgfr4" Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.537515 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.553132 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tmr5\" (UniqueName: \"kubernetes.io/projected/b44fde09-6008-44d4-999b-3ddf1c198ff7-kube-api-access-4tmr5\") pod \"redhat-marketplace-pgfr4\" (UID: \"b44fde09-6008-44d4-999b-3ddf1c198ff7\") " pod="openshift-marketplace/redhat-marketplace-pgfr4" Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.619534 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.650821 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pgfr4" Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.675470 4796 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.675517 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.688244 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-26bqf"] Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.730859 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rgp7h\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.744122 4796 generic.go:334] "Generic (PLEG): container finished" podID="839017d2-1f6e-4b5b-93ec-80175eabe5f8" containerID="cf797c3586aab2981ba21c79961f65975cb2a107df657035c85905a2e2ad0ce3" exitCode=0 Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.744192 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kgxxm" event={"ID":"839017d2-1f6e-4b5b-93ec-80175eabe5f8","Type":"ContainerDied","Data":"cf797c3586aab2981ba21c79961f65975cb2a107df657035c85905a2e2ad0ce3"} Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.744221 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kgxxm" event={"ID":"839017d2-1f6e-4b5b-93ec-80175eabe5f8","Type":"ContainerStarted","Data":"872a41c7aad584c15f2a6ec3692c56271ac003beccb701fb0d829abd819a9d81"} Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.747204 4796 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.759588 4796 generic.go:334] "Generic (PLEG): container finished" podID="88577409-7021-4e24-852b-4d8f6d0c512a" containerID="d9035e4a79216ad12ee4a1704c868dd91d4297cc8b1867ef6cc25b21c102cd32" exitCode=0 Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.759692 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-65fxg" event={"ID":"88577409-7021-4e24-852b-4d8f6d0c512a","Type":"ContainerDied","Data":"d9035e4a79216ad12ee4a1704c868dd91d4297cc8b1867ef6cc25b21c102cd32"} Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.759727 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-65fxg" event={"ID":"88577409-7021-4e24-852b-4d8f6d0c512a","Type":"ContainerStarted","Data":"152f37cce76a1cf6dee323df9f1317fe4924bfad1a340f57859fb28f053c017a"} Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.767040 4796 generic.go:334] "Generic (PLEG): container finished" podID="5411f34e-b1b7-4e1a-9948-49eb9b59a5d8" containerID="7047fa7fe67cacae8f4b81242971975c50db66f0092caa1dbc3283ff5e9568af" exitCode=0 Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.767639 4796 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n27mr" event={"ID":"5411f34e-b1b7-4e1a-9948-49eb9b59a5d8","Type":"ContainerDied","Data":"7047fa7fe67cacae8f4b81242971975c50db66f0092caa1dbc3283ff5e9568af"} Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.774589 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26bqf" event={"ID":"2f262dee-3028-4aa8-8ab3-8e4777368da0","Type":"ContainerStarted","Data":"817f7c22743daffd1aa255b591fe02e8458ebbd949d1d064ee18770207a0483c"} Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.786307 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tjstr" event={"ID":"298ad988-5006-4eb8-8f3b-72bc50af51a6","Type":"ContainerStarted","Data":"229edb73d6500f8491eb5437aa4770dd7c1eea82c3f99361feb9811a4c47792a"} Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.786376 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tjstr" event={"ID":"298ad988-5006-4eb8-8f3b-72bc50af51a6","Type":"ContainerStarted","Data":"bd593b1e2a1f014575199acd3d2cd995798ccb906dd58d3d4fd955594b2c111b"} Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.790915 4796 generic.go:334] "Generic (PLEG): container finished" podID="22a52028-b443-4287-80e1-dfcffb2ba07e" containerID="679acf89474aafaeb00440a2a756196a3257dd2b5b9a3efc2373d136e30b80a3" exitCode=0 Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.791841 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wq72" event={"ID":"22a52028-b443-4287-80e1-dfcffb2ba07e","Type":"ContainerDied","Data":"679acf89474aafaeb00440a2a756196a3257dd2b5b9a3efc2373d136e30b80a3"} Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.821289 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.915429 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.916442 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.940300 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.957981 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.958005 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.992280 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cz22v"] Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.995234 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cz22v"] Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.995727 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cz22v" Dec 02 20:14:32 crc kubenswrapper[4796]: I1202 20:14:32.998496 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.028858 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9f4295f6-2375-491c-96fc-925c4bc60233-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9f4295f6-2375-491c-96fc-925c4bc60233\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.030701 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9f4295f6-2375-491c-96fc-925c4bc60233-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9f4295f6-2375-491c-96fc-925c4bc60233\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.109510 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pgfr4"] Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.139730 4796 patch_prober.go:28] interesting pod/router-default-5444994796-hn5qp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 20:14:33 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Dec 02 20:14:33 crc kubenswrapper[4796]: [+]process-running ok Dec 02 20:14:33 crc kubenswrapper[4796]: healthz check failed Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.139797 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hn5qp" podUID="9775f308-f5d8-4bc7-bfc0-00f065833a55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.141918 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffd16a4b-7b78-4954-8ea4-317fdfcedb55-catalog-content\") pod \"redhat-operators-cz22v\" (UID: \"ffd16a4b-7b78-4954-8ea4-317fdfcedb55\") " pod="openshift-marketplace/redhat-operators-cz22v" Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.141997 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffd16a4b-7b78-4954-8ea4-317fdfcedb55-utilities\") pod \"redhat-operators-cz22v\" (UID: \"ffd16a4b-7b78-4954-8ea4-317fdfcedb55\") " pod="openshift-marketplace/redhat-operators-cz22v" Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.142025 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9f4295f6-2375-491c-96fc-925c4bc60233-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9f4295f6-2375-491c-96fc-925c4bc60233\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.142052 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/9f4295f6-2375-491c-96fc-925c4bc60233-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9f4295f6-2375-491c-96fc-925c4bc60233\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.142110 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzv4k\" (UniqueName: \"kubernetes.io/projected/ffd16a4b-7b78-4954-8ea4-317fdfcedb55-kube-api-access-mzv4k\") pod \"redhat-operators-cz22v\" (UID: \"ffd16a4b-7b78-4954-8ea4-317fdfcedb55\") " pod="openshift-marketplace/redhat-operators-cz22v" Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.142456 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9f4295f6-2375-491c-96fc-925c4bc60233-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9f4295f6-2375-491c-96fc-925c4bc60233\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.171559 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9f4295f6-2375-491c-96fc-925c4bc60233-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9f4295f6-2375-491c-96fc-925c4bc60233\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.225969 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rgp7h"] Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.249431 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzv4k\" (UniqueName: \"kubernetes.io/projected/ffd16a4b-7b78-4954-8ea4-317fdfcedb55-kube-api-access-mzv4k\") pod \"redhat-operators-cz22v\" (UID: \"ffd16a4b-7b78-4954-8ea4-317fdfcedb55\") " pod="openshift-marketplace/redhat-operators-cz22v" Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.249908 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffd16a4b-7b78-4954-8ea4-317fdfcedb55-catalog-content\") pod \"redhat-operators-cz22v\" (UID: \"ffd16a4b-7b78-4954-8ea4-317fdfcedb55\") " pod="openshift-marketplace/redhat-operators-cz22v" Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.249953 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffd16a4b-7b78-4954-8ea4-317fdfcedb55-utilities\") pod \"redhat-operators-cz22v\" (UID: \"ffd16a4b-7b78-4954-8ea4-317fdfcedb55\") " pod="openshift-marketplace/redhat-operators-cz22v" Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.250607 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffd16a4b-7b78-4954-8ea4-317fdfcedb55-utilities\") pod \"redhat-operators-cz22v\" (UID: \"ffd16a4b-7b78-4954-8ea4-317fdfcedb55\") " pod="openshift-marketplace/redhat-operators-cz22v" Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.250658 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffd16a4b-7b78-4954-8ea4-317fdfcedb55-catalog-content\") pod \"redhat-operators-cz22v\" (UID: \"ffd16a4b-7b78-4954-8ea4-317fdfcedb55\") " pod="openshift-marketplace/redhat-operators-cz22v" Dec 02 20:14:33 crc 
kubenswrapper[4796]: I1202 20:14:33.283538 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.287224 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzv4k\" (UniqueName: \"kubernetes.io/projected/ffd16a4b-7b78-4954-8ea4-317fdfcedb55-kube-api-access-mzv4k\") pod \"redhat-operators-cz22v\" (UID: \"ffd16a4b-7b78-4954-8ea4-317fdfcedb55\") " pod="openshift-marketplace/redhat-operators-cz22v" Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.292616 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.321739 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cz22v" Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.330804 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4ks8z"] Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.335110 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4ks8z" Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.348456 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4ks8z"] Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.454754 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7www5\" (UniqueName: \"kubernetes.io/projected/b6bddb9c-450c-4804-a59f-b1b290a74b9a-kube-api-access-7www5\") pod \"redhat-operators-4ks8z\" (UID: \"b6bddb9c-450c-4804-a59f-b1b290a74b9a\") " pod="openshift-marketplace/redhat-operators-4ks8z" Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.454833 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6bddb9c-450c-4804-a59f-b1b290a74b9a-catalog-content\") pod \"redhat-operators-4ks8z\" (UID: \"b6bddb9c-450c-4804-a59f-b1b290a74b9a\") " pod="openshift-marketplace/redhat-operators-4ks8z" Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.454874 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6bddb9c-450c-4804-a59f-b1b290a74b9a-utilities\") pod \"redhat-operators-4ks8z\" (UID: \"b6bddb9c-450c-4804-a59f-b1b290a74b9a\") " pod="openshift-marketplace/redhat-operators-4ks8z" Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.556148 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6bddb9c-450c-4804-a59f-b1b290a74b9a-utilities\") pod \"redhat-operators-4ks8z\" (UID: \"b6bddb9c-450c-4804-a59f-b1b290a74b9a\") " pod="openshift-marketplace/redhat-operators-4ks8z" Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.556654 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7www5\" (UniqueName: \"kubernetes.io/projected/b6bddb9c-450c-4804-a59f-b1b290a74b9a-kube-api-access-7www5\") pod \"redhat-operators-4ks8z\" (UID: \"b6bddb9c-450c-4804-a59f-b1b290a74b9a\") " pod="openshift-marketplace/redhat-operators-4ks8z" 
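The marketplace catalog pods in this window (redhat-marketplace-pgfr4, redhat-operators-cz22v, redhat-operators-4ks8z) each walk the same volume sequence: VerifyControllerAttachedVolume started, then MountVolume started, then MountVolume.SetUp succeeded for their utilities and catalog-content emptyDirs and the projected kube-api-access token. When scanning a longer journal it can help to tabulate that progression per pod and volume so that any volume stuck short of SetUp stands out. The sketch below does this over the same hypothetical kubelet.log export as above; the phase labels and output format are illustrative.

```python
#!/usr/bin/env python3
"""Tabulate kubelet volume-mount progress per (pod, volume) from an
exported kubelet journal (illustrative sketch)."""
import re
from collections import defaultdict

LOGFILE = "kubelet.log"  # hypothetical export of the kubelet journal

# Volume names appear as volume \"utilities\" (escaped inside the message)
# or volume "pvc-..." elsewhere in the journal.
VOLUME = re.compile(r'volume \\?"([^"\\]+)\\?"')
POD = re.compile(r'pod="([^"]+)"')  # the structured pod=... field of the entry
PHASES = [
    ("VerifyControllerAttachedVolume started", "attach-verified"),
    ("MountVolume started", "mount-started"),
    ("MountVolume.SetUp succeeded", "setup-ok"),
]

progress = defaultdict(set)  # (pod, volume) -> phases observed so far
with open(LOGFILE) as fh:
    for line in fh:
        vol, pod = VOLUME.search(line), POD.search(line)
        if not (vol and pod):
            continue
        for needle, phase in PHASES:
            if needle in line:
                progress[(pod.group(1), vol.group(1))].add(phase)

for (pod, vol), phases in sorted(progress.items()):
    status = "ok" if "setup-ok" in phases else "INCOMPLETE"
    print(f"{status:10} {pod}  {vol}  {sorted(phases)}")
```

Entries without a structured pod= field, such as the UnmountVolume lines for pods that are already being torn down, are skipped on purpose, which keeps the table limited to volumes the kubelet is actively setting up.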
Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.556703 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6bddb9c-450c-4804-a59f-b1b290a74b9a-catalog-content\") pod \"redhat-operators-4ks8z\" (UID: \"b6bddb9c-450c-4804-a59f-b1b290a74b9a\") " pod="openshift-marketplace/redhat-operators-4ks8z" Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.557189 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6bddb9c-450c-4804-a59f-b1b290a74b9a-catalog-content\") pod \"redhat-operators-4ks8z\" (UID: \"b6bddb9c-450c-4804-a59f-b1b290a74b9a\") " pod="openshift-marketplace/redhat-operators-4ks8z" Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.557409 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6bddb9c-450c-4804-a59f-b1b290a74b9a-utilities\") pod \"redhat-operators-4ks8z\" (UID: \"b6bddb9c-450c-4804-a59f-b1b290a74b9a\") " pod="openshift-marketplace/redhat-operators-4ks8z" Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.577046 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7www5\" (UniqueName: \"kubernetes.io/projected/b6bddb9c-450c-4804-a59f-b1b290a74b9a-kube-api-access-7www5\") pod \"redhat-operators-4ks8z\" (UID: \"b6bddb9c-450c-4804-a59f-b1b290a74b9a\") " pod="openshift-marketplace/redhat-operators-4ks8z" Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.662608 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4ks8z" Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.692132 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cz22v"] Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.736368 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 02 20:14:33 crc kubenswrapper[4796]: W1202 20:14:33.749836 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod9f4295f6_2375_491c_96fc_925c4bc60233.slice/crio-2a52293fc4ce801c3667bab3ae4d498b8786058ea8848c258538a7df650e5886 WatchSource:0}: Error finding container 2a52293fc4ce801c3667bab3ae4d498b8786058ea8848c258538a7df650e5886: Status 404 returned error can't find the container with id 2a52293fc4ce801c3667bab3ae4d498b8786058ea8848c258538a7df650e5886 Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.800166 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cz22v" event={"ID":"ffd16a4b-7b78-4954-8ea4-317fdfcedb55","Type":"ContainerStarted","Data":"88eaa862d2772a0c57ea1a7c9dfffbf425542285db78868efc729a8d83acfe77"} Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.809294 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" event={"ID":"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d","Type":"ContainerStarted","Data":"3d85e768e80fdb6a1daa5b04f9a0618062ec1ef1734329d306849094a305f52a"} Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.809350 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" 
event={"ID":"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d","Type":"ContainerStarted","Data":"3a2a6cbc99a251e4ccd8eac099b82f5056437992e396d29600c2345826466f30"} Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.811217 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.818277 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9f4295f6-2375-491c-96fc-925c4bc60233","Type":"ContainerStarted","Data":"2a52293fc4ce801c3667bab3ae4d498b8786058ea8848c258538a7df650e5886"} Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.824685 4796 generic.go:334] "Generic (PLEG): container finished" podID="2f262dee-3028-4aa8-8ab3-8e4777368da0" containerID="812c0e3acf6e8602f29a8ca752b9d153590ee7f229cc1a2b8c51e02ead8e23fa" exitCode=0 Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.824798 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26bqf" event={"ID":"2f262dee-3028-4aa8-8ab3-8e4777368da0","Type":"ContainerDied","Data":"812c0e3acf6e8602f29a8ca752b9d153590ee7f229cc1a2b8c51e02ead8e23fa"} Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.833391 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" podStartSLOduration=134.833367135 podStartE2EDuration="2m14.833367135s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:33.831455049 +0000 UTC m=+156.834830573" watchObservedRunningTime="2025-12-02 20:14:33.833367135 +0000 UTC m=+156.836742659" Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.842713 4796 generic.go:334] "Generic (PLEG): container finished" podID="bfed3adf-1892-4da3-8ffc-f4033036a4ca" containerID="315836e3c4f18ebca59f4b0bc725aa52dbf8449c8292a10fac6b44dc47598897" exitCode=0 Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.844390 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411760-zzp4z" event={"ID":"bfed3adf-1892-4da3-8ffc-f4033036a4ca","Type":"ContainerDied","Data":"315836e3c4f18ebca59f4b0bc725aa52dbf8449c8292a10fac6b44dc47598897"} Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.847434 4796 generic.go:334] "Generic (PLEG): container finished" podID="b44fde09-6008-44d4-999b-3ddf1c198ff7" containerID="520b245c86724a7eb7bdb4ab507fd8f0110b72a970ee833e4817feaff23b64d3" exitCode=0 Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.847682 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pgfr4" event={"ID":"b44fde09-6008-44d4-999b-3ddf1c198ff7","Type":"ContainerDied","Data":"520b245c86724a7eb7bdb4ab507fd8f0110b72a970ee833e4817feaff23b64d3"} Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.847747 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pgfr4" event={"ID":"b44fde09-6008-44d4-999b-3ddf1c198ff7","Type":"ContainerStarted","Data":"c06fbb962880b5fec9d3f5dd7d7a5ea98ec9da1236e97f01804c453f85a2485c"} Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.903505 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tjstr" 
event={"ID":"298ad988-5006-4eb8-8f3b-72bc50af51a6","Type":"ContainerStarted","Data":"02143ce6564220f583dd47134af3cf9bd5863309c477cd37c9ee4ac69cbfe9e1"} Dec 02 20:14:33 crc kubenswrapper[4796]: I1202 20:14:33.976018 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-tjstr" podStartSLOduration=12.975991777 podStartE2EDuration="12.975991777s" podCreationTimestamp="2025-12-02 20:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:33.942567189 +0000 UTC m=+156.945942733" watchObservedRunningTime="2025-12-02 20:14:33.975991777 +0000 UTC m=+156.979367311" Dec 02 20:14:34 crc kubenswrapper[4796]: I1202 20:14:34.059813 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" Dec 02 20:14:34 crc kubenswrapper[4796]: I1202 20:14:34.059885 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" Dec 02 20:14:34 crc kubenswrapper[4796]: I1202 20:14:34.069530 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" Dec 02 20:14:34 crc kubenswrapper[4796]: I1202 20:14:34.115715 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-hn5qp" Dec 02 20:14:34 crc kubenswrapper[4796]: I1202 20:14:34.120752 4796 patch_prober.go:28] interesting pod/router-default-5444994796-hn5qp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 20:14:34 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Dec 02 20:14:34 crc kubenswrapper[4796]: [+]process-running ok Dec 02 20:14:34 crc kubenswrapper[4796]: healthz check failed Dec 02 20:14:34 crc kubenswrapper[4796]: I1202 20:14:34.120819 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hn5qp" podUID="9775f308-f5d8-4bc7-bfc0-00f065833a55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 20:14:34 crc kubenswrapper[4796]: I1202 20:14:34.174932 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4ks8z"] Dec 02 20:14:34 crc kubenswrapper[4796]: W1202 20:14:34.187889 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6bddb9c_450c_4804_a59f_b1b290a74b9a.slice/crio-d371a6d1f546479971e629824a87f3a9eba6111ebdabdd2a8804caa55c906b0f WatchSource:0}: Error finding container d371a6d1f546479971e629824a87f3a9eba6111ebdabdd2a8804caa55c906b0f: Status 404 returned error can't find the container with id d371a6d1f546479971e629824a87f3a9eba6111ebdabdd2a8804caa55c906b0f Dec 02 20:14:34 crc kubenswrapper[4796]: I1202 20:14:34.209770 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh2w9" Dec 02 20:14:34 crc kubenswrapper[4796]: I1202 20:14:34.209935 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh2w9" Dec 02 20:14:34 crc kubenswrapper[4796]: I1202 20:14:34.215739 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh2w9" Dec 02 20:14:34 crc kubenswrapper[4796]: I1202 20:14:34.611358 4796 patch_prober.go:28] interesting pod/downloads-7954f5f757-8jxpl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Dec 02 20:14:34 crc kubenswrapper[4796]: I1202 20:14:34.611416 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8jxpl" podUID="e716e73a-1881-44ca-8af0-fb9defd9645b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Dec 02 20:14:34 crc kubenswrapper[4796]: I1202 20:14:34.611537 4796 patch_prober.go:28] interesting pod/downloads-7954f5f757-8jxpl container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Dec 02 20:14:34 crc kubenswrapper[4796]: I1202 20:14:34.611606 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-8jxpl" podUID="e716e73a-1881-44ca-8af0-fb9defd9645b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Dec 02 20:14:34 crc kubenswrapper[4796]: I1202 20:14:34.683455 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-js54s" Dec 02 20:14:34 crc kubenswrapper[4796]: I1202 20:14:34.684772 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-js54s" Dec 02 20:14:34 crc kubenswrapper[4796]: I1202 20:14:34.685105 4796 patch_prober.go:28] interesting pod/console-f9d7485db-js54s container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Dec 02 20:14:34 crc kubenswrapper[4796]: I1202 20:14:34.685146 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-js54s" podUID="8e53767d-5052-4220-9645-b8d6d433a7df" containerName="console" probeResult="failure" output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused" Dec 02 20:14:34 crc kubenswrapper[4796]: I1202 20:14:34.917167 4796 generic.go:334] "Generic (PLEG): container finished" podID="ffd16a4b-7b78-4954-8ea4-317fdfcedb55" containerID="2befcad334750966addc3cf37b9bb6cef720f033b405ed4ab930dccd01b42a95" exitCode=0 Dec 02 20:14:34 crc kubenswrapper[4796]: I1202 20:14:34.917279 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cz22v" event={"ID":"ffd16a4b-7b78-4954-8ea4-317fdfcedb55","Type":"ContainerDied","Data":"2befcad334750966addc3cf37b9bb6cef720f033b405ed4ab930dccd01b42a95"} Dec 02 20:14:34 crc kubenswrapper[4796]: I1202 20:14:34.929646 4796 generic.go:334] "Generic (PLEG): container finished" podID="b6bddb9c-450c-4804-a59f-b1b290a74b9a" containerID="3f22a112488e33b80f8d4996fcb33c80fd2b82d87c9e75663f84940083569210" exitCode=0 Dec 02 20:14:34 crc kubenswrapper[4796]: I1202 20:14:34.929774 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ks8z" 
event={"ID":"b6bddb9c-450c-4804-a59f-b1b290a74b9a","Type":"ContainerDied","Data":"3f22a112488e33b80f8d4996fcb33c80fd2b82d87c9e75663f84940083569210"} Dec 02 20:14:34 crc kubenswrapper[4796]: I1202 20:14:34.929806 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ks8z" event={"ID":"b6bddb9c-450c-4804-a59f-b1b290a74b9a","Type":"ContainerStarted","Data":"d371a6d1f546479971e629824a87f3a9eba6111ebdabdd2a8804caa55c906b0f"} Dec 02 20:14:34 crc kubenswrapper[4796]: I1202 20:14:34.946798 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9f4295f6-2375-491c-96fc-925c4bc60233","Type":"ContainerStarted","Data":"831405805a7740e65c5961c0210decfd17dbb6afbdad3ea9613e8a8cd4b2e118"} Dec 02 20:14:34 crc kubenswrapper[4796]: I1202 20:14:34.953017 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-rmbh4" Dec 02 20:14:34 crc kubenswrapper[4796]: I1202 20:14:34.953585 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh2w9" Dec 02 20:14:35 crc kubenswrapper[4796]: I1202 20:14:35.034894 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.034876562 podStartE2EDuration="3.034876562s" podCreationTimestamp="2025-12-02 20:14:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:14:35.032843492 +0000 UTC m=+158.036219026" watchObservedRunningTime="2025-12-02 20:14:35.034876562 +0000 UTC m=+158.038252096" Dec 02 20:14:35 crc kubenswrapper[4796]: I1202 20:14:35.122811 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-8nh9j" Dec 02 20:14:35 crc kubenswrapper[4796]: I1202 20:14:35.127460 4796 patch_prober.go:28] interesting pod/router-default-5444994796-hn5qp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 20:14:35 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Dec 02 20:14:35 crc kubenswrapper[4796]: [+]process-running ok Dec 02 20:14:35 crc kubenswrapper[4796]: healthz check failed Dec 02 20:14:35 crc kubenswrapper[4796]: I1202 20:14:35.127546 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hn5qp" podUID="9775f308-f5d8-4bc7-bfc0-00f065833a55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 20:14:35 crc kubenswrapper[4796]: I1202 20:14:35.574400 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411760-zzp4z" Dec 02 20:14:35 crc kubenswrapper[4796]: I1202 20:14:35.720560 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bfed3adf-1892-4da3-8ffc-f4033036a4ca-config-volume\") pod \"bfed3adf-1892-4da3-8ffc-f4033036a4ca\" (UID: \"bfed3adf-1892-4da3-8ffc-f4033036a4ca\") " Dec 02 20:14:35 crc kubenswrapper[4796]: I1202 20:14:35.720741 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d6gl\" (UniqueName: \"kubernetes.io/projected/bfed3adf-1892-4da3-8ffc-f4033036a4ca-kube-api-access-2d6gl\") pod \"bfed3adf-1892-4da3-8ffc-f4033036a4ca\" (UID: \"bfed3adf-1892-4da3-8ffc-f4033036a4ca\") " Dec 02 20:14:35 crc kubenswrapper[4796]: I1202 20:14:35.720815 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bfed3adf-1892-4da3-8ffc-f4033036a4ca-secret-volume\") pod \"bfed3adf-1892-4da3-8ffc-f4033036a4ca\" (UID: \"bfed3adf-1892-4da3-8ffc-f4033036a4ca\") " Dec 02 20:14:35 crc kubenswrapper[4796]: I1202 20:14:35.721418 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfed3adf-1892-4da3-8ffc-f4033036a4ca-config-volume" (OuterVolumeSpecName: "config-volume") pod "bfed3adf-1892-4da3-8ffc-f4033036a4ca" (UID: "bfed3adf-1892-4da3-8ffc-f4033036a4ca"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:14:35 crc kubenswrapper[4796]: I1202 20:14:35.729369 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfed3adf-1892-4da3-8ffc-f4033036a4ca-kube-api-access-2d6gl" (OuterVolumeSpecName: "kube-api-access-2d6gl") pod "bfed3adf-1892-4da3-8ffc-f4033036a4ca" (UID: "bfed3adf-1892-4da3-8ffc-f4033036a4ca"). InnerVolumeSpecName "kube-api-access-2d6gl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:14:35 crc kubenswrapper[4796]: I1202 20:14:35.730229 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfed3adf-1892-4da3-8ffc-f4033036a4ca-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bfed3adf-1892-4da3-8ffc-f4033036a4ca" (UID: "bfed3adf-1892-4da3-8ffc-f4033036a4ca"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:14:35 crc kubenswrapper[4796]: I1202 20:14:35.822385 4796 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bfed3adf-1892-4da3-8ffc-f4033036a4ca-config-volume\") on node \"crc\" DevicePath \"\"" Dec 02 20:14:35 crc kubenswrapper[4796]: I1202 20:14:35.822429 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d6gl\" (UniqueName: \"kubernetes.io/projected/bfed3adf-1892-4da3-8ffc-f4033036a4ca-kube-api-access-2d6gl\") on node \"crc\" DevicePath \"\"" Dec 02 20:14:35 crc kubenswrapper[4796]: I1202 20:14:35.822458 4796 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bfed3adf-1892-4da3-8ffc-f4033036a4ca-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 02 20:14:36 crc kubenswrapper[4796]: I1202 20:14:36.020929 4796 generic.go:334] "Generic (PLEG): container finished" podID="9f4295f6-2375-491c-96fc-925c4bc60233" containerID="831405805a7740e65c5961c0210decfd17dbb6afbdad3ea9613e8a8cd4b2e118" exitCode=0 Dec 02 20:14:36 crc kubenswrapper[4796]: I1202 20:14:36.021019 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9f4295f6-2375-491c-96fc-925c4bc60233","Type":"ContainerDied","Data":"831405805a7740e65c5961c0210decfd17dbb6afbdad3ea9613e8a8cd4b2e118"} Dec 02 20:14:36 crc kubenswrapper[4796]: I1202 20:14:36.027510 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411760-zzp4z" Dec 02 20:14:36 crc kubenswrapper[4796]: I1202 20:14:36.035877 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411760-zzp4z" event={"ID":"bfed3adf-1892-4da3-8ffc-f4033036a4ca","Type":"ContainerDied","Data":"ed3eb6878e5bd50a532a14789f653d997dd2217f2f8c0371c7d32bc8f3ebb9b6"} Dec 02 20:14:36 crc kubenswrapper[4796]: I1202 20:14:36.035901 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed3eb6878e5bd50a532a14789f653d997dd2217f2f8c0371c7d32bc8f3ebb9b6" Dec 02 20:14:36 crc kubenswrapper[4796]: I1202 20:14:36.118402 4796 patch_prober.go:28] interesting pod/router-default-5444994796-hn5qp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 20:14:36 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Dec 02 20:14:36 crc kubenswrapper[4796]: [+]process-running ok Dec 02 20:14:36 crc kubenswrapper[4796]: healthz check failed Dec 02 20:14:36 crc kubenswrapper[4796]: I1202 20:14:36.118457 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hn5qp" podUID="9775f308-f5d8-4bc7-bfc0-00f065833a55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 20:14:37 crc kubenswrapper[4796]: I1202 20:14:37.125068 4796 patch_prober.go:28] interesting pod/router-default-5444994796-hn5qp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 20:14:37 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Dec 02 20:14:37 crc kubenswrapper[4796]: [+]process-running ok Dec 02 20:14:37 crc kubenswrapper[4796]: 
healthz check failed Dec 02 20:14:37 crc kubenswrapper[4796]: I1202 20:14:37.125162 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hn5qp" podUID="9775f308-f5d8-4bc7-bfc0-00f065833a55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 20:14:37 crc kubenswrapper[4796]: I1202 20:14:37.419767 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 20:14:37 crc kubenswrapper[4796]: I1202 20:14:37.560222 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9f4295f6-2375-491c-96fc-925c4bc60233-kubelet-dir\") pod \"9f4295f6-2375-491c-96fc-925c4bc60233\" (UID: \"9f4295f6-2375-491c-96fc-925c4bc60233\") " Dec 02 20:14:37 crc kubenswrapper[4796]: I1202 20:14:37.560466 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9f4295f6-2375-491c-96fc-925c4bc60233-kube-api-access\") pod \"9f4295f6-2375-491c-96fc-925c4bc60233\" (UID: \"9f4295f6-2375-491c-96fc-925c4bc60233\") " Dec 02 20:14:37 crc kubenswrapper[4796]: I1202 20:14:37.564208 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f4295f6-2375-491c-96fc-925c4bc60233-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9f4295f6-2375-491c-96fc-925c4bc60233" (UID: "9f4295f6-2375-491c-96fc-925c4bc60233"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:14:37 crc kubenswrapper[4796]: I1202 20:14:37.603645 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f4295f6-2375-491c-96fc-925c4bc60233-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9f4295f6-2375-491c-96fc-925c4bc60233" (UID: "9f4295f6-2375-491c-96fc-925c4bc60233"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:14:37 crc kubenswrapper[4796]: I1202 20:14:37.662367 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9f4295f6-2375-491c-96fc-925c4bc60233-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 20:14:37 crc kubenswrapper[4796]: I1202 20:14:37.662403 4796 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9f4295f6-2375-491c-96fc-925c4bc60233-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 02 20:14:38 crc kubenswrapper[4796]: I1202 20:14:38.054186 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9f4295f6-2375-491c-96fc-925c4bc60233","Type":"ContainerDied","Data":"2a52293fc4ce801c3667bab3ae4d498b8786058ea8848c258538a7df650e5886"} Dec 02 20:14:38 crc kubenswrapper[4796]: I1202 20:14:38.054230 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a52293fc4ce801c3667bab3ae4d498b8786058ea8848c258538a7df650e5886" Dec 02 20:14:38 crc kubenswrapper[4796]: I1202 20:14:38.054286 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 20:14:38 crc kubenswrapper[4796]: I1202 20:14:38.074547 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 02 20:14:38 crc kubenswrapper[4796]: E1202 20:14:38.074820 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f4295f6-2375-491c-96fc-925c4bc60233" containerName="pruner" Dec 02 20:14:38 crc kubenswrapper[4796]: I1202 20:14:38.074843 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f4295f6-2375-491c-96fc-925c4bc60233" containerName="pruner" Dec 02 20:14:38 crc kubenswrapper[4796]: E1202 20:14:38.074871 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfed3adf-1892-4da3-8ffc-f4033036a4ca" containerName="collect-profiles" Dec 02 20:14:38 crc kubenswrapper[4796]: I1202 20:14:38.074885 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfed3adf-1892-4da3-8ffc-f4033036a4ca" containerName="collect-profiles" Dec 02 20:14:38 crc kubenswrapper[4796]: I1202 20:14:38.075009 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f4295f6-2375-491c-96fc-925c4bc60233" containerName="pruner" Dec 02 20:14:38 crc kubenswrapper[4796]: I1202 20:14:38.075037 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfed3adf-1892-4da3-8ffc-f4033036a4ca" containerName="collect-profiles" Dec 02 20:14:38 crc kubenswrapper[4796]: I1202 20:14:38.075660 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 20:14:38 crc kubenswrapper[4796]: I1202 20:14:38.079136 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 02 20:14:38 crc kubenswrapper[4796]: I1202 20:14:38.079638 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 02 20:14:38 crc kubenswrapper[4796]: I1202 20:14:38.085881 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 02 20:14:38 crc kubenswrapper[4796]: I1202 20:14:38.119159 4796 patch_prober.go:28] interesting pod/router-default-5444994796-hn5qp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 20:14:38 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Dec 02 20:14:38 crc kubenswrapper[4796]: [+]process-running ok Dec 02 20:14:38 crc kubenswrapper[4796]: healthz check failed Dec 02 20:14:38 crc kubenswrapper[4796]: I1202 20:14:38.119299 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hn5qp" podUID="9775f308-f5d8-4bc7-bfc0-00f065833a55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 20:14:38 crc kubenswrapper[4796]: I1202 20:14:38.177988 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bc6bb660-c984-4816-8b1f-eba9e7dc8234-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"bc6bb660-c984-4816-8b1f-eba9e7dc8234\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 20:14:38 crc kubenswrapper[4796]: I1202 20:14:38.178086 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bc6bb660-c984-4816-8b1f-eba9e7dc8234-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"bc6bb660-c984-4816-8b1f-eba9e7dc8234\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 20:14:38 crc kubenswrapper[4796]: I1202 20:14:38.281361 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bc6bb660-c984-4816-8b1f-eba9e7dc8234-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"bc6bb660-c984-4816-8b1f-eba9e7dc8234\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 20:14:38 crc kubenswrapper[4796]: I1202 20:14:38.281489 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bc6bb660-c984-4816-8b1f-eba9e7dc8234-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"bc6bb660-c984-4816-8b1f-eba9e7dc8234\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 20:14:38 crc kubenswrapper[4796]: I1202 20:14:38.281519 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bc6bb660-c984-4816-8b1f-eba9e7dc8234-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"bc6bb660-c984-4816-8b1f-eba9e7dc8234\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 20:14:38 crc kubenswrapper[4796]: I1202 20:14:38.306107 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bc6bb660-c984-4816-8b1f-eba9e7dc8234-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"bc6bb660-c984-4816-8b1f-eba9e7dc8234\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 20:14:38 crc kubenswrapper[4796]: I1202 20:14:38.405800 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 20:14:38 crc kubenswrapper[4796]: I1202 20:14:38.418072 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:14:38 crc kubenswrapper[4796]: I1202 20:14:38.879528 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 02 20:14:38 crc kubenswrapper[4796]: W1202 20:14:38.934545 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podbc6bb660_c984_4816_8b1f_eba9e7dc8234.slice/crio-5c185fee8e658ab78a58f6388075c50e763ca96505fd6d40303167614da3e72c WatchSource:0}: Error finding container 5c185fee8e658ab78a58f6388075c50e763ca96505fd6d40303167614da3e72c: Status 404 returned error can't find the container with id 5c185fee8e658ab78a58f6388075c50e763ca96505fd6d40303167614da3e72c Dec 02 20:14:39 crc kubenswrapper[4796]: I1202 20:14:39.076723 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"bc6bb660-c984-4816-8b1f-eba9e7dc8234","Type":"ContainerStarted","Data":"5c185fee8e658ab78a58f6388075c50e763ca96505fd6d40303167614da3e72c"} Dec 02 20:14:39 crc kubenswrapper[4796]: I1202 20:14:39.120036 4796 patch_prober.go:28] interesting pod/router-default-5444994796-hn5qp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 20:14:39 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Dec 02 20:14:39 crc kubenswrapper[4796]: [+]process-running ok Dec 02 20:14:39 crc kubenswrapper[4796]: healthz check failed Dec 02 20:14:39 crc kubenswrapper[4796]: I1202 20:14:39.120591 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hn5qp" podUID="9775f308-f5d8-4bc7-bfc0-00f065833a55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 20:14:40 crc kubenswrapper[4796]: I1202 20:14:40.117822 4796 patch_prober.go:28] interesting pod/router-default-5444994796-hn5qp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 20:14:40 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Dec 02 20:14:40 crc kubenswrapper[4796]: [+]process-running ok Dec 02 20:14:40 crc kubenswrapper[4796]: healthz check failed Dec 02 20:14:40 crc kubenswrapper[4796]: I1202 20:14:40.117910 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hn5qp" podUID="9775f308-f5d8-4bc7-bfc0-00f065833a55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 20:14:40 crc kubenswrapper[4796]: I1202 20:14:40.175548 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-slnp6" Dec 02 20:14:41 crc kubenswrapper[4796]: I1202 20:14:41.120139 4796 patch_prober.go:28] interesting pod/router-default-5444994796-hn5qp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 20:14:41 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Dec 02 20:14:41 crc kubenswrapper[4796]: [+]process-running ok 
Dec 02 20:14:41 crc kubenswrapper[4796]: healthz check failed Dec 02 20:14:41 crc kubenswrapper[4796]: I1202 20:14:41.120304 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hn5qp" podUID="9775f308-f5d8-4bc7-bfc0-00f065833a55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 20:14:41 crc kubenswrapper[4796]: I1202 20:14:41.540140 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60c1710d-bf66-4687-8ee7-ea828cde5d53-metrics-certs\") pod \"network-metrics-daemon-g7nb5\" (UID: \"60c1710d-bf66-4687-8ee7-ea828cde5d53\") " pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:14:41 crc kubenswrapper[4796]: I1202 20:14:41.554206 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60c1710d-bf66-4687-8ee7-ea828cde5d53-metrics-certs\") pod \"network-metrics-daemon-g7nb5\" (UID: \"60c1710d-bf66-4687-8ee7-ea828cde5d53\") " pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:14:41 crc kubenswrapper[4796]: I1202 20:14:41.826326 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g7nb5" Dec 02 20:14:42 crc kubenswrapper[4796]: I1202 20:14:42.118212 4796 generic.go:334] "Generic (PLEG): container finished" podID="bc6bb660-c984-4816-8b1f-eba9e7dc8234" containerID="54d46e57c1b40ccfa0a84bed905567020067f8569747cd59d0bd70eda2a785d8" exitCode=0 Dec 02 20:14:42 crc kubenswrapper[4796]: I1202 20:14:42.118359 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"bc6bb660-c984-4816-8b1f-eba9e7dc8234","Type":"ContainerDied","Data":"54d46e57c1b40ccfa0a84bed905567020067f8569747cd59d0bd70eda2a785d8"} Dec 02 20:14:42 crc kubenswrapper[4796]: I1202 20:14:42.118641 4796 patch_prober.go:28] interesting pod/router-default-5444994796-hn5qp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 20:14:42 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Dec 02 20:14:42 crc kubenswrapper[4796]: [+]process-running ok Dec 02 20:14:42 crc kubenswrapper[4796]: healthz check failed Dec 02 20:14:42 crc kubenswrapper[4796]: I1202 20:14:42.118819 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hn5qp" podUID="9775f308-f5d8-4bc7-bfc0-00f065833a55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 20:14:43 crc kubenswrapper[4796]: I1202 20:14:43.118264 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-hn5qp" Dec 02 20:14:43 crc kubenswrapper[4796]: I1202 20:14:43.122228 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-hn5qp" Dec 02 20:14:44 crc kubenswrapper[4796]: I1202 20:14:44.626564 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-8jxpl" Dec 02 20:14:44 crc kubenswrapper[4796]: I1202 20:14:44.687966 4796 patch_prober.go:28] interesting pod/console-f9d7485db-js54s container/console namespace/openshift-console: Startup probe status=failure output="Get 
\"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Dec 02 20:14:44 crc kubenswrapper[4796]: I1202 20:14:44.688024 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-js54s" podUID="8e53767d-5052-4220-9645-b8d6d433a7df" containerName="console" probeResult="failure" output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused" Dec 02 20:14:51 crc kubenswrapper[4796]: I1202 20:14:51.544356 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 20:14:51 crc kubenswrapper[4796]: I1202 20:14:51.602807 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bc6bb660-c984-4816-8b1f-eba9e7dc8234-kube-api-access\") pod \"bc6bb660-c984-4816-8b1f-eba9e7dc8234\" (UID: \"bc6bb660-c984-4816-8b1f-eba9e7dc8234\") " Dec 02 20:14:51 crc kubenswrapper[4796]: I1202 20:14:51.602868 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bc6bb660-c984-4816-8b1f-eba9e7dc8234-kubelet-dir\") pod \"bc6bb660-c984-4816-8b1f-eba9e7dc8234\" (UID: \"bc6bb660-c984-4816-8b1f-eba9e7dc8234\") " Dec 02 20:14:51 crc kubenswrapper[4796]: I1202 20:14:51.603302 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc6bb660-c984-4816-8b1f-eba9e7dc8234-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bc6bb660-c984-4816-8b1f-eba9e7dc8234" (UID: "bc6bb660-c984-4816-8b1f-eba9e7dc8234"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:14:51 crc kubenswrapper[4796]: I1202 20:14:51.611545 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc6bb660-c984-4816-8b1f-eba9e7dc8234-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bc6bb660-c984-4816-8b1f-eba9e7dc8234" (UID: "bc6bb660-c984-4816-8b1f-eba9e7dc8234"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:14:51 crc kubenswrapper[4796]: I1202 20:14:51.705598 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bc6bb660-c984-4816-8b1f-eba9e7dc8234-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 20:14:51 crc kubenswrapper[4796]: I1202 20:14:51.705710 4796 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bc6bb660-c984-4816-8b1f-eba9e7dc8234-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 02 20:14:52 crc kubenswrapper[4796]: I1202 20:14:52.203153 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"bc6bb660-c984-4816-8b1f-eba9e7dc8234","Type":"ContainerDied","Data":"5c185fee8e658ab78a58f6388075c50e763ca96505fd6d40303167614da3e72c"} Dec 02 20:14:52 crc kubenswrapper[4796]: I1202 20:14:52.203299 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c185fee8e658ab78a58f6388075c50e763ca96505fd6d40303167614da3e72c" Dec 02 20:14:52 crc kubenswrapper[4796]: I1202 20:14:52.203221 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 20:14:52 crc kubenswrapper[4796]: I1202 20:14:52.826873 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:14:54 crc kubenswrapper[4796]: I1202 20:14:54.686800 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-js54s" Dec 02 20:14:54 crc kubenswrapper[4796]: I1202 20:14:54.691069 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-js54s" Dec 02 20:14:55 crc kubenswrapper[4796]: I1202 20:14:55.189797 4796 patch_prober.go:28] interesting pod/machine-config-daemon-wzhpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 20:14:55 crc kubenswrapper[4796]: I1202 20:14:55.189895 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 20:15:00 crc kubenswrapper[4796]: I1202 20:15:00.143665 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411775-bcm6z"] Dec 02 20:15:00 crc kubenswrapper[4796]: E1202 20:15:00.145405 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc6bb660-c984-4816-8b1f-eba9e7dc8234" containerName="pruner" Dec 02 20:15:00 crc kubenswrapper[4796]: I1202 20:15:00.145435 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc6bb660-c984-4816-8b1f-eba9e7dc8234" containerName="pruner" Dec 02 20:15:00 crc kubenswrapper[4796]: I1202 20:15:00.145755 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc6bb660-c984-4816-8b1f-eba9e7dc8234" containerName="pruner" Dec 02 20:15:00 crc kubenswrapper[4796]: I1202 20:15:00.146805 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411775-bcm6z" Dec 02 20:15:00 crc kubenswrapper[4796]: I1202 20:15:00.153230 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 02 20:15:00 crc kubenswrapper[4796]: I1202 20:15:00.154756 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 02 20:15:00 crc kubenswrapper[4796]: I1202 20:15:00.171638 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411775-bcm6z"] Dec 02 20:15:00 crc kubenswrapper[4796]: I1202 20:15:00.255509 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72ztd\" (UniqueName: \"kubernetes.io/projected/224c81f1-cd3a-4aa4-bf5a-bd6d33b80513-kube-api-access-72ztd\") pod \"collect-profiles-29411775-bcm6z\" (UID: \"224c81f1-cd3a-4aa4-bf5a-bd6d33b80513\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411775-bcm6z" Dec 02 20:15:00 crc kubenswrapper[4796]: I1202 20:15:00.255607 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/224c81f1-cd3a-4aa4-bf5a-bd6d33b80513-config-volume\") pod \"collect-profiles-29411775-bcm6z\" (UID: \"224c81f1-cd3a-4aa4-bf5a-bd6d33b80513\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411775-bcm6z" Dec 02 20:15:00 crc kubenswrapper[4796]: I1202 20:15:00.255648 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/224c81f1-cd3a-4aa4-bf5a-bd6d33b80513-secret-volume\") pod \"collect-profiles-29411775-bcm6z\" (UID: \"224c81f1-cd3a-4aa4-bf5a-bd6d33b80513\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411775-bcm6z" Dec 02 20:15:00 crc kubenswrapper[4796]: I1202 20:15:00.357345 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72ztd\" (UniqueName: \"kubernetes.io/projected/224c81f1-cd3a-4aa4-bf5a-bd6d33b80513-kube-api-access-72ztd\") pod \"collect-profiles-29411775-bcm6z\" (UID: \"224c81f1-cd3a-4aa4-bf5a-bd6d33b80513\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411775-bcm6z" Dec 02 20:15:00 crc kubenswrapper[4796]: I1202 20:15:00.357445 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/224c81f1-cd3a-4aa4-bf5a-bd6d33b80513-config-volume\") pod \"collect-profiles-29411775-bcm6z\" (UID: \"224c81f1-cd3a-4aa4-bf5a-bd6d33b80513\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411775-bcm6z" Dec 02 20:15:00 crc kubenswrapper[4796]: I1202 20:15:00.357484 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/224c81f1-cd3a-4aa4-bf5a-bd6d33b80513-secret-volume\") pod \"collect-profiles-29411775-bcm6z\" (UID: \"224c81f1-cd3a-4aa4-bf5a-bd6d33b80513\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411775-bcm6z" Dec 02 20:15:00 crc kubenswrapper[4796]: I1202 20:15:00.358584 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/224c81f1-cd3a-4aa4-bf5a-bd6d33b80513-config-volume\") pod 
\"collect-profiles-29411775-bcm6z\" (UID: \"224c81f1-cd3a-4aa4-bf5a-bd6d33b80513\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411775-bcm6z" Dec 02 20:15:00 crc kubenswrapper[4796]: I1202 20:15:00.363853 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/224c81f1-cd3a-4aa4-bf5a-bd6d33b80513-secret-volume\") pod \"collect-profiles-29411775-bcm6z\" (UID: \"224c81f1-cd3a-4aa4-bf5a-bd6d33b80513\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411775-bcm6z" Dec 02 20:15:00 crc kubenswrapper[4796]: I1202 20:15:00.379133 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72ztd\" (UniqueName: \"kubernetes.io/projected/224c81f1-cd3a-4aa4-bf5a-bd6d33b80513-kube-api-access-72ztd\") pod \"collect-profiles-29411775-bcm6z\" (UID: \"224c81f1-cd3a-4aa4-bf5a-bd6d33b80513\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411775-bcm6z" Dec 02 20:15:00 crc kubenswrapper[4796]: I1202 20:15:00.471802 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411775-bcm6z" Dec 02 20:15:03 crc kubenswrapper[4796]: I1202 20:15:03.778035 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 20:15:03 crc kubenswrapper[4796]: E1202 20:15:03.864035 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 02 20:15:03 crc kubenswrapper[4796]: E1202 20:15:03.864604 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdk6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-65fxg_openshift-marketplace(88577409-7021-4e24-852b-4d8f6d0c512a): ErrImagePull: rpc error: code = Canceled desc = copying system image 
from manifest list: copying config: context canceled" logger="UnhandledError" Dec 02 20:15:03 crc kubenswrapper[4796]: E1202 20:15:03.866082 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-65fxg" podUID="88577409-7021-4e24-852b-4d8f6d0c512a" Dec 02 20:15:04 crc kubenswrapper[4796]: I1202 20:15:04.830933 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqd2f" Dec 02 20:15:07 crc kubenswrapper[4796]: E1202 20:15:07.445995 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-65fxg" podUID="88577409-7021-4e24-852b-4d8f6d0c512a" Dec 02 20:15:07 crc kubenswrapper[4796]: E1202 20:15:07.515046 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 02 20:15:07 crc kubenswrapper[4796]: E1202 20:15:07.515233 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7www5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-4ks8z_openshift-marketplace(b6bddb9c-450c-4804-a59f-b1b290a74b9a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 02 20:15:07 crc kubenswrapper[4796]: E1202 20:15:07.516492 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying 
system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-4ks8z" podUID="b6bddb9c-450c-4804-a59f-b1b290a74b9a" Dec 02 20:15:09 crc kubenswrapper[4796]: E1202 20:15:09.191417 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-4ks8z" podUID="b6bddb9c-450c-4804-a59f-b1b290a74b9a" Dec 02 20:15:09 crc kubenswrapper[4796]: E1202 20:15:09.288070 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 02 20:15:09 crc kubenswrapper[4796]: E1202 20:15:09.288297 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w6q4l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-5wq72_openshift-marketplace(22a52028-b443-4287-80e1-dfcffb2ba07e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 02 20:15:09 crc kubenswrapper[4796]: E1202 20:15:09.289686 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-5wq72" podUID="22a52028-b443-4287-80e1-dfcffb2ba07e" Dec 02 20:15:09 crc kubenswrapper[4796]: E1202 20:15:09.297146 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 02 20:15:09 crc kubenswrapper[4796]: E1202 20:15:09.297495 4796 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cc62z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-26bqf_openshift-marketplace(2f262dee-3028-4aa8-8ab3-8e4777368da0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 02 20:15:09 crc kubenswrapper[4796]: E1202 20:15:09.298782 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-26bqf" podUID="2f262dee-3028-4aa8-8ab3-8e4777368da0" Dec 02 20:15:09 crc kubenswrapper[4796]: E1202 20:15:09.320474 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-26bqf" podUID="2f262dee-3028-4aa8-8ab3-8e4777368da0" Dec 02 20:15:09 crc kubenswrapper[4796]: E1202 20:15:09.321163 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-5wq72" podUID="22a52028-b443-4287-80e1-dfcffb2ba07e" Dec 02 20:15:09 crc kubenswrapper[4796]: E1202 20:15:09.409835 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 02 20:15:09 crc kubenswrapper[4796]: E1202 20:15:09.410536 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5cdlt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-kgxxm_openshift-marketplace(839017d2-1f6e-4b5b-93ec-80175eabe5f8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 02 20:15:09 crc kubenswrapper[4796]: E1202 20:15:09.412051 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-kgxxm" podUID="839017d2-1f6e-4b5b-93ec-80175eabe5f8" Dec 02 20:15:09 crc kubenswrapper[4796]: I1202 20:15:09.705979 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-g7nb5"] Dec 02 20:15:09 crc kubenswrapper[4796]: I1202 20:15:09.788049 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411775-bcm6z"] Dec 02 20:15:09 crc kubenswrapper[4796]: W1202 20:15:09.795973 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod224c81f1_cd3a_4aa4_bf5a_bd6d33b80513.slice/crio-89ab06afcb839c22b9264cefde6aabfce66efa57a958a9e5560619dbfb17ac10 WatchSource:0}: Error finding container 89ab06afcb839c22b9264cefde6aabfce66efa57a958a9e5560619dbfb17ac10: Status 404 returned error can't find the container with id 89ab06afcb839c22b9264cefde6aabfce66efa57a958a9e5560619dbfb17ac10 Dec 02 20:15:10 crc kubenswrapper[4796]: I1202 20:15:10.326786 4796 generic.go:334] "Generic (PLEG): container finished" podID="ffd16a4b-7b78-4954-8ea4-317fdfcedb55" containerID="fba7b6612b2008bc045d6351ffd5d22378832f240c59b0b01ea8998deb7011b5" exitCode=0 Dec 02 20:15:10 crc kubenswrapper[4796]: I1202 20:15:10.326893 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cz22v" 
event={"ID":"ffd16a4b-7b78-4954-8ea4-317fdfcedb55","Type":"ContainerDied","Data":"fba7b6612b2008bc045d6351ffd5d22378832f240c59b0b01ea8998deb7011b5"} Dec 02 20:15:10 crc kubenswrapper[4796]: I1202 20:15:10.337543 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-g7nb5" event={"ID":"60c1710d-bf66-4687-8ee7-ea828cde5d53","Type":"ContainerStarted","Data":"2d69eb30a5f6698beae95fa924b7acb306f91b93c113ca47c7b983050716496f"} Dec 02 20:15:10 crc kubenswrapper[4796]: I1202 20:15:10.337615 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-g7nb5" event={"ID":"60c1710d-bf66-4687-8ee7-ea828cde5d53","Type":"ContainerStarted","Data":"e2a1c023bcc3acffaca0b168db3d409b67df4a38317d3ca48fbf2012a37d555a"} Dec 02 20:15:10 crc kubenswrapper[4796]: I1202 20:15:10.337633 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-g7nb5" event={"ID":"60c1710d-bf66-4687-8ee7-ea828cde5d53","Type":"ContainerStarted","Data":"d4d81590fe22f1fd4014db1dadbce174a677a2dbbea3faedf88a5dd2ef32f767"} Dec 02 20:15:10 crc kubenswrapper[4796]: I1202 20:15:10.343297 4796 generic.go:334] "Generic (PLEG): container finished" podID="5411f34e-b1b7-4e1a-9948-49eb9b59a5d8" containerID="72cc768e36e1a448eba20aba7f9a78a8b8cdf3a06f68b4a51fbac99925bc398b" exitCode=0 Dec 02 20:15:10 crc kubenswrapper[4796]: I1202 20:15:10.343391 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n27mr" event={"ID":"5411f34e-b1b7-4e1a-9948-49eb9b59a5d8","Type":"ContainerDied","Data":"72cc768e36e1a448eba20aba7f9a78a8b8cdf3a06f68b4a51fbac99925bc398b"} Dec 02 20:15:10 crc kubenswrapper[4796]: I1202 20:15:10.348990 4796 generic.go:334] "Generic (PLEG): container finished" podID="224c81f1-cd3a-4aa4-bf5a-bd6d33b80513" containerID="5704f8dc60b37aeb4c59dadc4c03704b2115069f1aaf483a2ae354141eea0c29" exitCode=0 Dec 02 20:15:10 crc kubenswrapper[4796]: I1202 20:15:10.349164 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411775-bcm6z" event={"ID":"224c81f1-cd3a-4aa4-bf5a-bd6d33b80513","Type":"ContainerDied","Data":"5704f8dc60b37aeb4c59dadc4c03704b2115069f1aaf483a2ae354141eea0c29"} Dec 02 20:15:10 crc kubenswrapper[4796]: I1202 20:15:10.349208 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411775-bcm6z" event={"ID":"224c81f1-cd3a-4aa4-bf5a-bd6d33b80513","Type":"ContainerStarted","Data":"89ab06afcb839c22b9264cefde6aabfce66efa57a958a9e5560619dbfb17ac10"} Dec 02 20:15:10 crc kubenswrapper[4796]: I1202 20:15:10.353003 4796 generic.go:334] "Generic (PLEG): container finished" podID="b44fde09-6008-44d4-999b-3ddf1c198ff7" containerID="984360efbe410a68843ed6f9f366475df175cfbc057a4d09c588eff426389b85" exitCode=0 Dec 02 20:15:10 crc kubenswrapper[4796]: I1202 20:15:10.353390 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pgfr4" event={"ID":"b44fde09-6008-44d4-999b-3ddf1c198ff7","Type":"ContainerDied","Data":"984360efbe410a68843ed6f9f366475df175cfbc057a4d09c588eff426389b85"} Dec 02 20:15:10 crc kubenswrapper[4796]: E1202 20:15:10.356162 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/certified-operators-kgxxm" podUID="839017d2-1f6e-4b5b-93ec-80175eabe5f8" Dec 02 20:15:10 crc kubenswrapper[4796]: I1202 20:15:10.390736 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-g7nb5" podStartSLOduration=171.390709073 podStartE2EDuration="2m51.390709073s" podCreationTimestamp="2025-12-02 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:15:10.390023427 +0000 UTC m=+193.393398961" watchObservedRunningTime="2025-12-02 20:15:10.390709073 +0000 UTC m=+193.394084617" Dec 02 20:15:11 crc kubenswrapper[4796]: I1202 20:15:11.367742 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pgfr4" event={"ID":"b44fde09-6008-44d4-999b-3ddf1c198ff7","Type":"ContainerStarted","Data":"176c23a98e3747b50f51ba5ce396fcb0fa562d0bb44ea50d38f4761140e6842e"} Dec 02 20:15:11 crc kubenswrapper[4796]: I1202 20:15:11.381420 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cz22v" event={"ID":"ffd16a4b-7b78-4954-8ea4-317fdfcedb55","Type":"ContainerStarted","Data":"38effd0c32d0dfbe93c3272cf4809fd97b96660f180bd12989617040b61e493e"} Dec 02 20:15:11 crc kubenswrapper[4796]: I1202 20:15:11.388090 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n27mr" event={"ID":"5411f34e-b1b7-4e1a-9948-49eb9b59a5d8","Type":"ContainerStarted","Data":"a3ae620d611ffb04c0cf54fbae2b42522af461907187e0604771b703214f50d8"} Dec 02 20:15:11 crc kubenswrapper[4796]: I1202 20:15:11.395003 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pgfr4" podStartSLOduration=2.470599502 podStartE2EDuration="39.394973965s" podCreationTimestamp="2025-12-02 20:14:32 +0000 UTC" firstStartedPulling="2025-12-02 20:14:33.916549779 +0000 UTC m=+156.919925313" lastFinishedPulling="2025-12-02 20:15:10.840924242 +0000 UTC m=+193.844299776" observedRunningTime="2025-12-02 20:15:11.390532887 +0000 UTC m=+194.393908421" watchObservedRunningTime="2025-12-02 20:15:11.394973965 +0000 UTC m=+194.398349499" Dec 02 20:15:11 crc kubenswrapper[4796]: I1202 20:15:11.414611 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cz22v" podStartSLOduration=3.424110115 podStartE2EDuration="39.4145882s" podCreationTimestamp="2025-12-02 20:14:32 +0000 UTC" firstStartedPulling="2025-12-02 20:14:34.918977196 +0000 UTC m=+157.922352730" lastFinishedPulling="2025-12-02 20:15:10.909455281 +0000 UTC m=+193.912830815" observedRunningTime="2025-12-02 20:15:11.408790939 +0000 UTC m=+194.412166473" watchObservedRunningTime="2025-12-02 20:15:11.4145882 +0000 UTC m=+194.417963734" Dec 02 20:15:11 crc kubenswrapper[4796]: I1202 20:15:11.433523 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n27mr" podStartSLOduration=4.402152415 podStartE2EDuration="42.433500497s" podCreationTimestamp="2025-12-02 20:14:29 +0000 UTC" firstStartedPulling="2025-12-02 20:14:32.76864905 +0000 UTC m=+155.772024584" lastFinishedPulling="2025-12-02 20:15:10.799997132 +0000 UTC m=+193.803372666" observedRunningTime="2025-12-02 20:15:11.429453769 +0000 UTC m=+194.432829303" watchObservedRunningTime="2025-12-02 20:15:11.433500497 +0000 UTC m=+194.436876031" Dec 02 20:15:11 crc 
kubenswrapper[4796]: I1202 20:15:11.782792 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411775-bcm6z" Dec 02 20:15:11 crc kubenswrapper[4796]: I1202 20:15:11.941722 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/224c81f1-cd3a-4aa4-bf5a-bd6d33b80513-secret-volume\") pod \"224c81f1-cd3a-4aa4-bf5a-bd6d33b80513\" (UID: \"224c81f1-cd3a-4aa4-bf5a-bd6d33b80513\") " Dec 02 20:15:11 crc kubenswrapper[4796]: I1202 20:15:11.941872 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/224c81f1-cd3a-4aa4-bf5a-bd6d33b80513-config-volume\") pod \"224c81f1-cd3a-4aa4-bf5a-bd6d33b80513\" (UID: \"224c81f1-cd3a-4aa4-bf5a-bd6d33b80513\") " Dec 02 20:15:11 crc kubenswrapper[4796]: I1202 20:15:11.941897 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72ztd\" (UniqueName: \"kubernetes.io/projected/224c81f1-cd3a-4aa4-bf5a-bd6d33b80513-kube-api-access-72ztd\") pod \"224c81f1-cd3a-4aa4-bf5a-bd6d33b80513\" (UID: \"224c81f1-cd3a-4aa4-bf5a-bd6d33b80513\") " Dec 02 20:15:11 crc kubenswrapper[4796]: I1202 20:15:11.942915 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/224c81f1-cd3a-4aa4-bf5a-bd6d33b80513-config-volume" (OuterVolumeSpecName: "config-volume") pod "224c81f1-cd3a-4aa4-bf5a-bd6d33b80513" (UID: "224c81f1-cd3a-4aa4-bf5a-bd6d33b80513"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:15:11 crc kubenswrapper[4796]: I1202 20:15:11.953425 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/224c81f1-cd3a-4aa4-bf5a-bd6d33b80513-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "224c81f1-cd3a-4aa4-bf5a-bd6d33b80513" (UID: "224c81f1-cd3a-4aa4-bf5a-bd6d33b80513"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:15:11 crc kubenswrapper[4796]: I1202 20:15:11.953494 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/224c81f1-cd3a-4aa4-bf5a-bd6d33b80513-kube-api-access-72ztd" (OuterVolumeSpecName: "kube-api-access-72ztd") pod "224c81f1-cd3a-4aa4-bf5a-bd6d33b80513" (UID: "224c81f1-cd3a-4aa4-bf5a-bd6d33b80513"). InnerVolumeSpecName "kube-api-access-72ztd". 
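
The "Observed pod startup duration" entries just above make the relationship between the two reported figures visible: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same interval minus the image-pulling window (lastFinishedPulling minus firstStartedPulling), so pull time is excluded from the SLO figure. Using the redhat-marketplace-pgfr4 timestamps from the log, 39.39s end-to-end minus a 36.92s pull leaves the reported 2.47s; a short Go check:

package main

import (
	"fmt"
	"time"
)

// mustParse parses timestamps in the format these log entries use.
func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Values for openshift-marketplace/redhat-marketplace-pgfr4, copied from the log.
	created := mustParse("2025-12-02 20:14:32 +0000 UTC")
	firstPull := mustParse("2025-12-02 20:14:33.916549779 +0000 UTC")
	lastPull := mustParse("2025-12-02 20:15:10.840924242 +0000 UTC")
	running := mustParse("2025-12-02 20:15:11.394973965 +0000 UTC") // watchObservedRunningTime

	e2e := running.Sub(created)          // 39.394973965s = podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 2.470599502s  = podStartSLOduration
	fmt.Println(e2e, slo)
}

For pods that never pull an image the two figures coincide, which is why installer-9-crc and revision-pruner-9 later in the section report identical SLO and E2E durations alongside zero-valued firstStartedPulling/lastFinishedPulling timestamps.
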
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:15:12 crc kubenswrapper[4796]: I1202 20:15:12.043878 4796 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/224c81f1-cd3a-4aa4-bf5a-bd6d33b80513-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 02 20:15:12 crc kubenswrapper[4796]: I1202 20:15:12.043926 4796 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/224c81f1-cd3a-4aa4-bf5a-bd6d33b80513-config-volume\") on node \"crc\" DevicePath \"\"" Dec 02 20:15:12 crc kubenswrapper[4796]: I1202 20:15:12.043940 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72ztd\" (UniqueName: \"kubernetes.io/projected/224c81f1-cd3a-4aa4-bf5a-bd6d33b80513-kube-api-access-72ztd\") on node \"crc\" DevicePath \"\"" Dec 02 20:15:12 crc kubenswrapper[4796]: I1202 20:15:12.396994 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411775-bcm6z" event={"ID":"224c81f1-cd3a-4aa4-bf5a-bd6d33b80513","Type":"ContainerDied","Data":"89ab06afcb839c22b9264cefde6aabfce66efa57a958a9e5560619dbfb17ac10"} Dec 02 20:15:12 crc kubenswrapper[4796]: I1202 20:15:12.397057 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89ab06afcb839c22b9264cefde6aabfce66efa57a958a9e5560619dbfb17ac10" Dec 02 20:15:12 crc kubenswrapper[4796]: I1202 20:15:12.397331 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411775-bcm6z" Dec 02 20:15:12 crc kubenswrapper[4796]: I1202 20:15:12.651349 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pgfr4" Dec 02 20:15:12 crc kubenswrapper[4796]: I1202 20:15:12.651408 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pgfr4" Dec 02 20:15:12 crc kubenswrapper[4796]: I1202 20:15:12.718696 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pgfr4" Dec 02 20:15:13 crc kubenswrapper[4796]: I1202 20:15:13.323163 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cz22v" Dec 02 20:15:13 crc kubenswrapper[4796]: I1202 20:15:13.323326 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cz22v" Dec 02 20:15:13 crc kubenswrapper[4796]: I1202 20:15:13.472802 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 02 20:15:13 crc kubenswrapper[4796]: E1202 20:15:13.473414 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="224c81f1-cd3a-4aa4-bf5a-bd6d33b80513" containerName="collect-profiles" Dec 02 20:15:13 crc kubenswrapper[4796]: I1202 20:15:13.473428 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="224c81f1-cd3a-4aa4-bf5a-bd6d33b80513" containerName="collect-profiles" Dec 02 20:15:13 crc kubenswrapper[4796]: I1202 20:15:13.473533 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="224c81f1-cd3a-4aa4-bf5a-bd6d33b80513" containerName="collect-profiles" Dec 02 20:15:13 crc kubenswrapper[4796]: I1202 20:15:13.473950 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 20:15:13 crc kubenswrapper[4796]: I1202 20:15:13.478199 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 02 20:15:13 crc kubenswrapper[4796]: I1202 20:15:13.478751 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 02 20:15:13 crc kubenswrapper[4796]: I1202 20:15:13.485816 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 02 20:15:13 crc kubenswrapper[4796]: I1202 20:15:13.566304 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/445063a3-8462-4b24-b139-7428e579e4f5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"445063a3-8462-4b24-b139-7428e579e4f5\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 20:15:13 crc kubenswrapper[4796]: I1202 20:15:13.566393 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/445063a3-8462-4b24-b139-7428e579e4f5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"445063a3-8462-4b24-b139-7428e579e4f5\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 20:15:13 crc kubenswrapper[4796]: I1202 20:15:13.668051 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/445063a3-8462-4b24-b139-7428e579e4f5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"445063a3-8462-4b24-b139-7428e579e4f5\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 20:15:13 crc kubenswrapper[4796]: I1202 20:15:13.668564 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/445063a3-8462-4b24-b139-7428e579e4f5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"445063a3-8462-4b24-b139-7428e579e4f5\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 20:15:13 crc kubenswrapper[4796]: I1202 20:15:13.668755 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/445063a3-8462-4b24-b139-7428e579e4f5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"445063a3-8462-4b24-b139-7428e579e4f5\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 20:15:13 crc kubenswrapper[4796]: I1202 20:15:13.687180 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/445063a3-8462-4b24-b139-7428e579e4f5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"445063a3-8462-4b24-b139-7428e579e4f5\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 20:15:13 crc kubenswrapper[4796]: I1202 20:15:13.789583 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 20:15:14 crc kubenswrapper[4796]: I1202 20:15:14.186851 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 02 20:15:14 crc kubenswrapper[4796]: I1202 20:15:14.373348 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cz22v" podUID="ffd16a4b-7b78-4954-8ea4-317fdfcedb55" containerName="registry-server" probeResult="failure" output=< Dec 02 20:15:14 crc kubenswrapper[4796]: timeout: failed to connect service ":50051" within 1s Dec 02 20:15:14 crc kubenswrapper[4796]: > Dec 02 20:15:14 crc kubenswrapper[4796]: I1202 20:15:14.409634 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"445063a3-8462-4b24-b139-7428e579e4f5","Type":"ContainerStarted","Data":"8b95fa5b97d42b6e096662359846fc7399a526b17a1e8a95fe9b6ec4649967bd"} Dec 02 20:15:15 crc kubenswrapper[4796]: I1202 20:15:15.420046 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"445063a3-8462-4b24-b139-7428e579e4f5","Type":"ContainerStarted","Data":"b34118d43b204fb0d559cb76d61d5157c1aa11297e8eead7357ffec5e4584abb"} Dec 02 20:15:15 crc kubenswrapper[4796]: I1202 20:15:15.439455 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.439413774 podStartE2EDuration="2.439413774s" podCreationTimestamp="2025-12-02 20:15:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:15:15.43721266 +0000 UTC m=+198.440588224" watchObservedRunningTime="2025-12-02 20:15:15.439413774 +0000 UTC m=+198.442789308" Dec 02 20:15:16 crc kubenswrapper[4796]: I1202 20:15:16.429244 4796 generic.go:334] "Generic (PLEG): container finished" podID="445063a3-8462-4b24-b139-7428e579e4f5" containerID="b34118d43b204fb0d559cb76d61d5157c1aa11297e8eead7357ffec5e4584abb" exitCode=0 Dec 02 20:15:16 crc kubenswrapper[4796]: I1202 20:15:16.429445 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"445063a3-8462-4b24-b139-7428e579e4f5","Type":"ContainerDied","Data":"b34118d43b204fb0d559cb76d61d5157c1aa11297e8eead7357ffec5e4584abb"} Dec 02 20:15:17 crc kubenswrapper[4796]: I1202 20:15:17.733479 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 20:15:17 crc kubenswrapper[4796]: I1202 20:15:17.841233 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/445063a3-8462-4b24-b139-7428e579e4f5-kubelet-dir\") pod \"445063a3-8462-4b24-b139-7428e579e4f5\" (UID: \"445063a3-8462-4b24-b139-7428e579e4f5\") " Dec 02 20:15:17 crc kubenswrapper[4796]: I1202 20:15:17.841381 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/445063a3-8462-4b24-b139-7428e579e4f5-kube-api-access\") pod \"445063a3-8462-4b24-b139-7428e579e4f5\" (UID: \"445063a3-8462-4b24-b139-7428e579e4f5\") " Dec 02 20:15:17 crc kubenswrapper[4796]: I1202 20:15:17.841416 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/445063a3-8462-4b24-b139-7428e579e4f5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "445063a3-8462-4b24-b139-7428e579e4f5" (UID: "445063a3-8462-4b24-b139-7428e579e4f5"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:15:17 crc kubenswrapper[4796]: I1202 20:15:17.841821 4796 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/445063a3-8462-4b24-b139-7428e579e4f5-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 02 20:15:17 crc kubenswrapper[4796]: I1202 20:15:17.854034 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/445063a3-8462-4b24-b139-7428e579e4f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "445063a3-8462-4b24-b139-7428e579e4f5" (UID: "445063a3-8462-4b24-b139-7428e579e4f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:15:17 crc kubenswrapper[4796]: I1202 20:15:17.943608 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/445063a3-8462-4b24-b139-7428e579e4f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 20:15:18 crc kubenswrapper[4796]: I1202 20:15:18.450155 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"445063a3-8462-4b24-b139-7428e579e4f5","Type":"ContainerDied","Data":"8b95fa5b97d42b6e096662359846fc7399a526b17a1e8a95fe9b6ec4649967bd"} Dec 02 20:15:18 crc kubenswrapper[4796]: I1202 20:15:18.450631 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b95fa5b97d42b6e096662359846fc7399a526b17a1e8a95fe9b6ec4649967bd" Dec 02 20:15:18 crc kubenswrapper[4796]: I1202 20:15:18.450229 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 20:15:19 crc kubenswrapper[4796]: I1202 20:15:19.470652 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 02 20:15:19 crc kubenswrapper[4796]: E1202 20:15:19.470920 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="445063a3-8462-4b24-b139-7428e579e4f5" containerName="pruner" Dec 02 20:15:19 crc kubenswrapper[4796]: I1202 20:15:19.470937 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="445063a3-8462-4b24-b139-7428e579e4f5" containerName="pruner" Dec 02 20:15:19 crc kubenswrapper[4796]: I1202 20:15:19.471056 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="445063a3-8462-4b24-b139-7428e579e4f5" containerName="pruner" Dec 02 20:15:19 crc kubenswrapper[4796]: I1202 20:15:19.471481 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 02 20:15:19 crc kubenswrapper[4796]: I1202 20:15:19.474018 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 02 20:15:19 crc kubenswrapper[4796]: I1202 20:15:19.474187 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 02 20:15:19 crc kubenswrapper[4796]: I1202 20:15:19.478648 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 20:15:19 crc kubenswrapper[4796]: I1202 20:15:19.478767 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d-kube-api-access\") pod \"installer-9-crc\" (UID: \"9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 20:15:19 crc kubenswrapper[4796]: I1202 20:15:19.478833 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d-var-lock\") pod \"installer-9-crc\" (UID: \"9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 20:15:19 crc kubenswrapper[4796]: I1202 20:15:19.496659 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 02 20:15:19 crc kubenswrapper[4796]: I1202 20:15:19.579822 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 20:15:19 crc kubenswrapper[4796]: I1202 20:15:19.579917 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d-kube-api-access\") pod \"installer-9-crc\" (UID: \"9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 20:15:19 crc kubenswrapper[4796]: I1202 20:15:19.579961 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d-var-lock\") pod \"installer-9-crc\" (UID: \"9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 20:15:19 crc kubenswrapper[4796]: I1202 20:15:19.579986 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 20:15:19 crc kubenswrapper[4796]: I1202 20:15:19.580149 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d-var-lock\") pod \"installer-9-crc\" (UID: \"9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 20:15:19 crc kubenswrapper[4796]: I1202 20:15:19.601311 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d-kube-api-access\") pod \"installer-9-crc\" (UID: \"9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 20:15:19 crc kubenswrapper[4796]: I1202 20:15:19.798941 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 02 20:15:20 crc kubenswrapper[4796]: I1202 20:15:20.091565 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n27mr" Dec 02 20:15:20 crc kubenswrapper[4796]: I1202 20:15:20.091637 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n27mr" Dec 02 20:15:20 crc kubenswrapper[4796]: I1202 20:15:20.157445 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n27mr" Dec 02 20:15:20 crc kubenswrapper[4796]: I1202 20:15:20.216429 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 02 20:15:20 crc kubenswrapper[4796]: W1202 20:15:20.223331 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod9a7746b6_206d_4c38_8f34_ba7b3f9e6e2d.slice/crio-ea078de209ec58fd5c53e3497b4021ddea37fcc5d8d0e41046ee0c0970b78ec1 WatchSource:0}: Error finding container ea078de209ec58fd5c53e3497b4021ddea37fcc5d8d0e41046ee0c0970b78ec1: Status 404 returned error can't find the container with id ea078de209ec58fd5c53e3497b4021ddea37fcc5d8d0e41046ee0c0970b78ec1 Dec 02 20:15:20 crc kubenswrapper[4796]: I1202 20:15:20.464817 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d","Type":"ContainerStarted","Data":"ea078de209ec58fd5c53e3497b4021ddea37fcc5d8d0e41046ee0c0970b78ec1"} Dec 02 20:15:20 crc kubenswrapper[4796]: I1202 20:15:20.508674 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n27mr" Dec 02 20:15:21 crc kubenswrapper[4796]: I1202 20:15:21.473234 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d","Type":"ContainerStarted","Data":"b742a14c8082df2c05f23063ff3f327a062cb167597af4143deb75c6d1f68012"} Dec 02 20:15:21 crc kubenswrapper[4796]: I1202 20:15:21.476953 4796 generic.go:334] "Generic (PLEG): container finished" podID="88577409-7021-4e24-852b-4d8f6d0c512a" containerID="170e856145789db1378cbfc4831b3abc9a73c8181905525153a014934fa61551" exitCode=0 Dec 02 20:15:21 crc kubenswrapper[4796]: I1202 20:15:21.477050 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-65fxg" event={"ID":"88577409-7021-4e24-852b-4d8f6d0c512a","Type":"ContainerDied","Data":"170e856145789db1378cbfc4831b3abc9a73c8181905525153a014934fa61551"} Dec 02 20:15:21 crc kubenswrapper[4796]: I1202 20:15:21.492447 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.492421099 podStartE2EDuration="2.492421099s" podCreationTimestamp="2025-12-02 20:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:15:21.491809955 +0000 UTC m=+204.495185489" watchObservedRunningTime="2025-12-02 20:15:21.492421099 +0000 UTC m=+204.495796633" Dec 02 20:15:22 crc kubenswrapper[4796]: I1202 20:15:22.486219 4796 generic.go:334] "Generic (PLEG): container finished" podID="2f262dee-3028-4aa8-8ab3-8e4777368da0" containerID="23d0491c8e75b23af7ee85a96bfcf43451beb7008c5c341a565038f4a9a94d9d" exitCode=0 Dec 02 20:15:22 crc kubenswrapper[4796]: I1202 20:15:22.486314 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26bqf" event={"ID":"2f262dee-3028-4aa8-8ab3-8e4777368da0","Type":"ContainerDied","Data":"23d0491c8e75b23af7ee85a96bfcf43451beb7008c5c341a565038f4a9a94d9d"} Dec 02 20:15:22 crc kubenswrapper[4796]: I1202 20:15:22.699026 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pgfr4" Dec 02 20:15:23 crc kubenswrapper[4796]: I1202 20:15:23.367325 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cz22v" Dec 02 20:15:23 crc kubenswrapper[4796]: I1202 20:15:23.418798 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cz22v" Dec 02 20:15:23 crc kubenswrapper[4796]: I1202 20:15:23.506976 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26bqf" event={"ID":"2f262dee-3028-4aa8-8ab3-8e4777368da0","Type":"ContainerStarted","Data":"2e7f188102737030e033828789dac0e145312ee1ce94a37f8f0a5a024f3a5aa8"} Dec 02 20:15:23 crc kubenswrapper[4796]: I1202 20:15:23.510531 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ks8z" event={"ID":"b6bddb9c-450c-4804-a59f-b1b290a74b9a","Type":"ContainerStarted","Data":"51a59cf56d9904e9dbfaba97c120076174f2df09f67407b8d6f64438f0952223"} Dec 02 20:15:23 crc kubenswrapper[4796]: I1202 20:15:23.514058 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-65fxg" event={"ID":"88577409-7021-4e24-852b-4d8f6d0c512a","Type":"ContainerStarted","Data":"a3c8c059b8978cc8025cac47da70d63974c31057e8558f6a6849cacd59debffe"} Dec 02 20:15:23 crc kubenswrapper[4796]: I1202 20:15:23.537826 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-26bqf" podStartSLOduration=3.168319389 podStartE2EDuration="52.537420048s" podCreationTimestamp="2025-12-02 20:14:31 +0000 UTC" firstStartedPulling="2025-12-02 20:14:33.84018469 +0000 UTC m=+156.843560224" lastFinishedPulling="2025-12-02 20:15:23.209285339 +0000 UTC m=+206.212660883" observedRunningTime="2025-12-02 20:15:23.530787202 +0000 UTC m=+206.534162736" watchObservedRunningTime="2025-12-02 20:15:23.537420048 +0000 UTC m=+206.540795582" Dec 02 20:15:23 crc kubenswrapper[4796]: I1202 20:15:23.555308 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-65fxg" podStartSLOduration=3.181945651 podStartE2EDuration="53.555284715s" podCreationTimestamp="2025-12-02 20:14:30 +0000 UTC" firstStartedPulling="2025-12-02 20:14:32.765070664 +0000 UTC m=+155.768446198" lastFinishedPulling="2025-12-02 20:15:23.138409728 +0000 UTC m=+206.141785262" observedRunningTime="2025-12-02 20:15:23.554537045 +0000 UTC m=+206.557912579" watchObservedRunningTime="2025-12-02 20:15:23.555284715 +0000 UTC m=+206.558660249" Dec 02 20:15:24 crc kubenswrapper[4796]: I1202 20:15:24.523138 4796 generic.go:334] "Generic (PLEG): container finished" podID="22a52028-b443-4287-80e1-dfcffb2ba07e" containerID="570304f3bf069b1960dd9a1a828a98886201f668c89ec92e15a68025cf60d46e" exitCode=0 Dec 02 20:15:24 crc kubenswrapper[4796]: I1202 20:15:24.523197 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wq72" event={"ID":"22a52028-b443-4287-80e1-dfcffb2ba07e","Type":"ContainerDied","Data":"570304f3bf069b1960dd9a1a828a98886201f668c89ec92e15a68025cf60d46e"} Dec 02 20:15:24 crc kubenswrapper[4796]: I1202 20:15:24.526991 4796 generic.go:334] "Generic (PLEG): container finished" podID="b6bddb9c-450c-4804-a59f-b1b290a74b9a" containerID="51a59cf56d9904e9dbfaba97c120076174f2df09f67407b8d6f64438f0952223" exitCode=0 Dec 02 20:15:24 crc kubenswrapper[4796]: I1202 20:15:24.527080 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ks8z" event={"ID":"b6bddb9c-450c-4804-a59f-b1b290a74b9a","Type":"ContainerDied","Data":"51a59cf56d9904e9dbfaba97c120076174f2df09f67407b8d6f64438f0952223"} Dec 02 20:15:25 crc kubenswrapper[4796]: I1202 20:15:25.189871 4796 patch_prober.go:28] interesting pod/machine-config-daemon-wzhpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 20:15:25 crc kubenswrapper[4796]: I1202 20:15:25.190375 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 20:15:25 crc kubenswrapper[4796]: I1202 20:15:25.190451 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" Dec 02 20:15:25 crc kubenswrapper[4796]: I1202 20:15:25.191313 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0ff692b0e7bcf0f9571138c9632ae4c22dddcccc147c6b0e7994e7ff6e13aa08"} 
pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 20:15:25 crc kubenswrapper[4796]: I1202 20:15:25.191463 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerName="machine-config-daemon" containerID="cri-o://0ff692b0e7bcf0f9571138c9632ae4c22dddcccc147c6b0e7994e7ff6e13aa08" gracePeriod=600 Dec 02 20:15:25 crc kubenswrapper[4796]: I1202 20:15:25.302891 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pgfr4"] Dec 02 20:15:25 crc kubenswrapper[4796]: I1202 20:15:25.303712 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pgfr4" podUID="b44fde09-6008-44d4-999b-3ddf1c198ff7" containerName="registry-server" containerID="cri-o://176c23a98e3747b50f51ba5ce396fcb0fa562d0bb44ea50d38f4761140e6842e" gracePeriod=2 Dec 02 20:15:25 crc kubenswrapper[4796]: I1202 20:15:25.535445 4796 generic.go:334] "Generic (PLEG): container finished" podID="b44fde09-6008-44d4-999b-3ddf1c198ff7" containerID="176c23a98e3747b50f51ba5ce396fcb0fa562d0bb44ea50d38f4761140e6842e" exitCode=0 Dec 02 20:15:25 crc kubenswrapper[4796]: I1202 20:15:25.535558 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pgfr4" event={"ID":"b44fde09-6008-44d4-999b-3ddf1c198ff7","Type":"ContainerDied","Data":"176c23a98e3747b50f51ba5ce396fcb0fa562d0bb44ea50d38f4761140e6842e"} Dec 02 20:15:25 crc kubenswrapper[4796]: I1202 20:15:25.540183 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wq72" event={"ID":"22a52028-b443-4287-80e1-dfcffb2ba07e","Type":"ContainerStarted","Data":"b1f78422ededbb686ec95ff9c6ac1bced379acc2fb1d2940322f5d73ed37336e"} Dec 02 20:15:25 crc kubenswrapper[4796]: I1202 20:15:25.545893 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ks8z" event={"ID":"b6bddb9c-450c-4804-a59f-b1b290a74b9a","Type":"ContainerStarted","Data":"f6ff36cf27c8ad9268c930f8bba2e853a982d6fd5781e25a1a99792d8f102249"} Dec 02 20:15:25 crc kubenswrapper[4796]: I1202 20:15:25.548786 4796 generic.go:334] "Generic (PLEG): container finished" podID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerID="0ff692b0e7bcf0f9571138c9632ae4c22dddcccc147c6b0e7994e7ff6e13aa08" exitCode=0 Dec 02 20:15:25 crc kubenswrapper[4796]: I1202 20:15:25.548846 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" event={"ID":"5558dc7c-93f9-4212-bf22-fdec743e47ee","Type":"ContainerDied","Data":"0ff692b0e7bcf0f9571138c9632ae4c22dddcccc147c6b0e7994e7ff6e13aa08"} Dec 02 20:15:25 crc kubenswrapper[4796]: I1202 20:15:25.568056 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5wq72" podStartSLOduration=4.419735707 podStartE2EDuration="56.568022175s" podCreationTimestamp="2025-12-02 20:14:29 +0000 UTC" firstStartedPulling="2025-12-02 20:14:32.796241998 +0000 UTC m=+155.799617532" lastFinishedPulling="2025-12-02 20:15:24.944528466 +0000 UTC m=+207.947904000" observedRunningTime="2025-12-02 20:15:25.564969579 +0000 UTC m=+208.568345113" watchObservedRunningTime="2025-12-02 20:15:25.568022175 +0000 UTC m=+208.571397709" Dec 02 20:15:25 
crc kubenswrapper[4796]: I1202 20:15:25.584875 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4ks8z" podStartSLOduration=2.635132 podStartE2EDuration="52.584853036s" podCreationTimestamp="2025-12-02 20:14:33 +0000 UTC" firstStartedPulling="2025-12-02 20:14:34.932125024 +0000 UTC m=+157.935500558" lastFinishedPulling="2025-12-02 20:15:24.88184606 +0000 UTC m=+207.885221594" observedRunningTime="2025-12-02 20:15:25.581240816 +0000 UTC m=+208.584616350" watchObservedRunningTime="2025-12-02 20:15:25.584853036 +0000 UTC m=+208.588228570" Dec 02 20:15:26 crc kubenswrapper[4796]: I1202 20:15:26.314460 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pgfr4" Dec 02 20:15:26 crc kubenswrapper[4796]: I1202 20:15:26.481389 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b44fde09-6008-44d4-999b-3ddf1c198ff7-utilities\") pod \"b44fde09-6008-44d4-999b-3ddf1c198ff7\" (UID: \"b44fde09-6008-44d4-999b-3ddf1c198ff7\") " Dec 02 20:15:26 crc kubenswrapper[4796]: I1202 20:15:26.481525 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tmr5\" (UniqueName: \"kubernetes.io/projected/b44fde09-6008-44d4-999b-3ddf1c198ff7-kube-api-access-4tmr5\") pod \"b44fde09-6008-44d4-999b-3ddf1c198ff7\" (UID: \"b44fde09-6008-44d4-999b-3ddf1c198ff7\") " Dec 02 20:15:26 crc kubenswrapper[4796]: I1202 20:15:26.481690 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b44fde09-6008-44d4-999b-3ddf1c198ff7-catalog-content\") pod \"b44fde09-6008-44d4-999b-3ddf1c198ff7\" (UID: \"b44fde09-6008-44d4-999b-3ddf1c198ff7\") " Dec 02 20:15:26 crc kubenswrapper[4796]: I1202 20:15:26.482714 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b44fde09-6008-44d4-999b-3ddf1c198ff7-utilities" (OuterVolumeSpecName: "utilities") pod "b44fde09-6008-44d4-999b-3ddf1c198ff7" (UID: "b44fde09-6008-44d4-999b-3ddf1c198ff7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:15:26 crc kubenswrapper[4796]: I1202 20:15:26.488714 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b44fde09-6008-44d4-999b-3ddf1c198ff7-kube-api-access-4tmr5" (OuterVolumeSpecName: "kube-api-access-4tmr5") pod "b44fde09-6008-44d4-999b-3ddf1c198ff7" (UID: "b44fde09-6008-44d4-999b-3ddf1c198ff7"). InnerVolumeSpecName "kube-api-access-4tmr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:15:26 crc kubenswrapper[4796]: I1202 20:15:26.502730 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b44fde09-6008-44d4-999b-3ddf1c198ff7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b44fde09-6008-44d4-999b-3ddf1c198ff7" (UID: "b44fde09-6008-44d4-999b-3ddf1c198ff7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:15:26 crc kubenswrapper[4796]: I1202 20:15:26.558672 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" event={"ID":"5558dc7c-93f9-4212-bf22-fdec743e47ee","Type":"ContainerStarted","Data":"4edf350718db085522ef32b4b6bc7016bd54791890e5d578b27f86be8c74f767"} Dec 02 20:15:26 crc kubenswrapper[4796]: I1202 20:15:26.564642 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pgfr4" event={"ID":"b44fde09-6008-44d4-999b-3ddf1c198ff7","Type":"ContainerDied","Data":"c06fbb962880b5fec9d3f5dd7d7a5ea98ec9da1236e97f01804c453f85a2485c"} Dec 02 20:15:26 crc kubenswrapper[4796]: I1202 20:15:26.564734 4796 scope.go:117] "RemoveContainer" containerID="176c23a98e3747b50f51ba5ce396fcb0fa562d0bb44ea50d38f4761140e6842e" Dec 02 20:15:26 crc kubenswrapper[4796]: I1202 20:15:26.564963 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pgfr4" Dec 02 20:15:26 crc kubenswrapper[4796]: I1202 20:15:26.583767 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tmr5\" (UniqueName: \"kubernetes.io/projected/b44fde09-6008-44d4-999b-3ddf1c198ff7-kube-api-access-4tmr5\") on node \"crc\" DevicePath \"\"" Dec 02 20:15:26 crc kubenswrapper[4796]: I1202 20:15:26.583813 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b44fde09-6008-44d4-999b-3ddf1c198ff7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 20:15:26 crc kubenswrapper[4796]: I1202 20:15:26.583830 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b44fde09-6008-44d4-999b-3ddf1c198ff7-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 20:15:26 crc kubenswrapper[4796]: I1202 20:15:26.594337 4796 scope.go:117] "RemoveContainer" containerID="984360efbe410a68843ed6f9f366475df175cfbc057a4d09c588eff426389b85" Dec 02 20:15:26 crc kubenswrapper[4796]: I1202 20:15:26.612739 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pgfr4"] Dec 02 20:15:26 crc kubenswrapper[4796]: I1202 20:15:26.617190 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pgfr4"] Dec 02 20:15:26 crc kubenswrapper[4796]: I1202 20:15:26.619628 4796 scope.go:117] "RemoveContainer" containerID="520b245c86724a7eb7bdb4ab507fd8f0110b72a970ee833e4817feaff23b64d3" Dec 02 20:15:27 crc kubenswrapper[4796]: I1202 20:15:27.277647 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b44fde09-6008-44d4-999b-3ddf1c198ff7" path="/var/lib/kubelet/pods/b44fde09-6008-44d4-999b-3ddf1c198ff7/volumes" Dec 02 20:15:28 crc kubenswrapper[4796]: I1202 20:15:28.581928 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kgxxm" event={"ID":"839017d2-1f6e-4b5b-93ec-80175eabe5f8","Type":"ContainerStarted","Data":"f184802d349a8176c10ab25274fcd801db52d37a21ab911508debf6d5c2ebb1b"} Dec 02 20:15:29 crc kubenswrapper[4796]: I1202 20:15:29.591674 4796 generic.go:334] "Generic (PLEG): container finished" podID="839017d2-1f6e-4b5b-93ec-80175eabe5f8" containerID="f184802d349a8176c10ab25274fcd801db52d37a21ab911508debf6d5c2ebb1b" exitCode=0 Dec 02 20:15:29 crc kubenswrapper[4796]: I1202 20:15:29.591727 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-kgxxm" event={"ID":"839017d2-1f6e-4b5b-93ec-80175eabe5f8","Type":"ContainerDied","Data":"f184802d349a8176c10ab25274fcd801db52d37a21ab911508debf6d5c2ebb1b"} Dec 02 20:15:30 crc kubenswrapper[4796]: I1202 20:15:30.316202 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5wq72" Dec 02 20:15:30 crc kubenswrapper[4796]: I1202 20:15:30.316863 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5wq72" Dec 02 20:15:30 crc kubenswrapper[4796]: I1202 20:15:30.381997 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5wq72" Dec 02 20:15:30 crc kubenswrapper[4796]: I1202 20:15:30.497014 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-65fxg" Dec 02 20:15:30 crc kubenswrapper[4796]: I1202 20:15:30.497100 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-65fxg" Dec 02 20:15:30 crc kubenswrapper[4796]: I1202 20:15:30.535705 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-65fxg" Dec 02 20:15:30 crc kubenswrapper[4796]: I1202 20:15:30.637163 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5wq72" Dec 02 20:15:30 crc kubenswrapper[4796]: I1202 20:15:30.637888 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-65fxg" Dec 02 20:15:31 crc kubenswrapper[4796]: I1202 20:15:31.608372 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kgxxm" event={"ID":"839017d2-1f6e-4b5b-93ec-80175eabe5f8","Type":"ContainerStarted","Data":"1f527513d42265b661e7e84999b7929d894e87be3cbc0f7cb860c43162cbff0a"} Dec 02 20:15:31 crc kubenswrapper[4796]: I1202 20:15:31.652127 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kgxxm" podStartSLOduration=3.5111430390000002 podStartE2EDuration="1m1.652095846s" podCreationTimestamp="2025-12-02 20:14:30 +0000 UTC" firstStartedPulling="2025-12-02 20:14:32.746878613 +0000 UTC m=+155.750254147" lastFinishedPulling="2025-12-02 20:15:30.88783141 +0000 UTC m=+213.891206954" observedRunningTime="2025-12-02 20:15:31.647818629 +0000 UTC m=+214.651194203" watchObservedRunningTime="2025-12-02 20:15:31.652095846 +0000 UTC m=+214.655471400" Dec 02 20:15:32 crc kubenswrapper[4796]: I1202 20:15:32.335104 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-26bqf" Dec 02 20:15:32 crc kubenswrapper[4796]: I1202 20:15:32.335211 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-26bqf" Dec 02 20:15:32 crc kubenswrapper[4796]: I1202 20:15:32.395736 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-26bqf" Dec 02 20:15:32 crc kubenswrapper[4796]: I1202 20:15:32.653735 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-26bqf" Dec 02 20:15:33 crc kubenswrapper[4796]: I1202 20:15:33.663419 4796 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4ks8z" Dec 02 20:15:33 crc kubenswrapper[4796]: I1202 20:15:33.663776 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4ks8z" Dec 02 20:15:33 crc kubenswrapper[4796]: I1202 20:15:33.706404 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-65fxg"] Dec 02 20:15:33 crc kubenswrapper[4796]: I1202 20:15:33.706943 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-65fxg" podUID="88577409-7021-4e24-852b-4d8f6d0c512a" containerName="registry-server" containerID="cri-o://a3c8c059b8978cc8025cac47da70d63974c31057e8558f6a6849cacd59debffe" gracePeriod=2 Dec 02 20:15:33 crc kubenswrapper[4796]: I1202 20:15:33.737084 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4ks8z" Dec 02 20:15:34 crc kubenswrapper[4796]: I1202 20:15:34.634690 4796 generic.go:334] "Generic (PLEG): container finished" podID="88577409-7021-4e24-852b-4d8f6d0c512a" containerID="a3c8c059b8978cc8025cac47da70d63974c31057e8558f6a6849cacd59debffe" exitCode=0 Dec 02 20:15:34 crc kubenswrapper[4796]: I1202 20:15:34.634808 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-65fxg" event={"ID":"88577409-7021-4e24-852b-4d8f6d0c512a","Type":"ContainerDied","Data":"a3c8c059b8978cc8025cac47da70d63974c31057e8558f6a6849cacd59debffe"} Dec 02 20:15:34 crc kubenswrapper[4796]: I1202 20:15:34.704113 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4ks8z" Dec 02 20:15:35 crc kubenswrapper[4796]: I1202 20:15:35.266885 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-65fxg" Dec 02 20:15:35 crc kubenswrapper[4796]: I1202 20:15:35.374220 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdk6p\" (UniqueName: \"kubernetes.io/projected/88577409-7021-4e24-852b-4d8f6d0c512a-kube-api-access-rdk6p\") pod \"88577409-7021-4e24-852b-4d8f6d0c512a\" (UID: \"88577409-7021-4e24-852b-4d8f6d0c512a\") " Dec 02 20:15:35 crc kubenswrapper[4796]: I1202 20:15:35.374430 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88577409-7021-4e24-852b-4d8f6d0c512a-catalog-content\") pod \"88577409-7021-4e24-852b-4d8f6d0c512a\" (UID: \"88577409-7021-4e24-852b-4d8f6d0c512a\") " Dec 02 20:15:35 crc kubenswrapper[4796]: I1202 20:15:35.374464 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88577409-7021-4e24-852b-4d8f6d0c512a-utilities\") pod \"88577409-7021-4e24-852b-4d8f6d0c512a\" (UID: \"88577409-7021-4e24-852b-4d8f6d0c512a\") " Dec 02 20:15:35 crc kubenswrapper[4796]: I1202 20:15:35.376425 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88577409-7021-4e24-852b-4d8f6d0c512a-utilities" (OuterVolumeSpecName: "utilities") pod "88577409-7021-4e24-852b-4d8f6d0c512a" (UID: "88577409-7021-4e24-852b-4d8f6d0c512a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:15:35 crc kubenswrapper[4796]: I1202 20:15:35.385361 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88577409-7021-4e24-852b-4d8f6d0c512a-kube-api-access-rdk6p" (OuterVolumeSpecName: "kube-api-access-rdk6p") pod "88577409-7021-4e24-852b-4d8f6d0c512a" (UID: "88577409-7021-4e24-852b-4d8f6d0c512a"). InnerVolumeSpecName "kube-api-access-rdk6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:15:35 crc kubenswrapper[4796]: I1202 20:15:35.460333 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88577409-7021-4e24-852b-4d8f6d0c512a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "88577409-7021-4e24-852b-4d8f6d0c512a" (UID: "88577409-7021-4e24-852b-4d8f6d0c512a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:15:35 crc kubenswrapper[4796]: I1202 20:15:35.478597 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdk6p\" (UniqueName: \"kubernetes.io/projected/88577409-7021-4e24-852b-4d8f6d0c512a-kube-api-access-rdk6p\") on node \"crc\" DevicePath \"\"" Dec 02 20:15:35 crc kubenswrapper[4796]: I1202 20:15:35.478668 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88577409-7021-4e24-852b-4d8f6d0c512a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 20:15:35 crc kubenswrapper[4796]: I1202 20:15:35.478692 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88577409-7021-4e24-852b-4d8f6d0c512a-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 20:15:35 crc kubenswrapper[4796]: I1202 20:15:35.649196 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-65fxg" event={"ID":"88577409-7021-4e24-852b-4d8f6d0c512a","Type":"ContainerDied","Data":"152f37cce76a1cf6dee323df9f1317fe4924bfad1a340f57859fb28f053c017a"} Dec 02 20:15:35 crc kubenswrapper[4796]: I1202 20:15:35.649290 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-65fxg" Dec 02 20:15:35 crc kubenswrapper[4796]: I1202 20:15:35.649970 4796 scope.go:117] "RemoveContainer" containerID="a3c8c059b8978cc8025cac47da70d63974c31057e8558f6a6849cacd59debffe" Dec 02 20:15:35 crc kubenswrapper[4796]: I1202 20:15:35.697981 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-65fxg"] Dec 02 20:15:35 crc kubenswrapper[4796]: I1202 20:15:35.704825 4796 scope.go:117] "RemoveContainer" containerID="170e856145789db1378cbfc4831b3abc9a73c8181905525153a014934fa61551" Dec 02 20:15:35 crc kubenswrapper[4796]: I1202 20:15:35.706683 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-65fxg"] Dec 02 20:15:35 crc kubenswrapper[4796]: I1202 20:15:35.733649 4796 scope.go:117] "RemoveContainer" containerID="d9035e4a79216ad12ee4a1704c868dd91d4297cc8b1867ef6cc25b21c102cd32" Dec 02 20:15:36 crc kubenswrapper[4796]: I1202 20:15:36.103768 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4ks8z"] Dec 02 20:15:37 crc kubenswrapper[4796]: I1202 20:15:37.278213 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88577409-7021-4e24-852b-4d8f6d0c512a" path="/var/lib/kubelet/pods/88577409-7021-4e24-852b-4d8f6d0c512a/volumes" Dec 02 20:15:37 crc kubenswrapper[4796]: I1202 20:15:37.664831 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4ks8z" podUID="b6bddb9c-450c-4804-a59f-b1b290a74b9a" containerName="registry-server" containerID="cri-o://f6ff36cf27c8ad9268c930f8bba2e853a982d6fd5781e25a1a99792d8f102249" gracePeriod=2 Dec 02 20:15:39 crc kubenswrapper[4796]: I1202 20:15:39.685058 4796 generic.go:334] "Generic (PLEG): container finished" podID="b6bddb9c-450c-4804-a59f-b1b290a74b9a" containerID="f6ff36cf27c8ad9268c930f8bba2e853a982d6fd5781e25a1a99792d8f102249" exitCode=0 Dec 02 20:15:39 crc kubenswrapper[4796]: I1202 20:15:39.685241 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ks8z" event={"ID":"b6bddb9c-450c-4804-a59f-b1b290a74b9a","Type":"ContainerDied","Data":"f6ff36cf27c8ad9268c930f8bba2e853a982d6fd5781e25a1a99792d8f102249"} Dec 02 20:15:39 crc kubenswrapper[4796]: I1202 20:15:39.993822 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4ks8z" Dec 02 20:15:40 crc kubenswrapper[4796]: I1202 20:15:40.154638 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7www5\" (UniqueName: \"kubernetes.io/projected/b6bddb9c-450c-4804-a59f-b1b290a74b9a-kube-api-access-7www5\") pod \"b6bddb9c-450c-4804-a59f-b1b290a74b9a\" (UID: \"b6bddb9c-450c-4804-a59f-b1b290a74b9a\") " Dec 02 20:15:40 crc kubenswrapper[4796]: I1202 20:15:40.155050 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6bddb9c-450c-4804-a59f-b1b290a74b9a-catalog-content\") pod \"b6bddb9c-450c-4804-a59f-b1b290a74b9a\" (UID: \"b6bddb9c-450c-4804-a59f-b1b290a74b9a\") " Dec 02 20:15:40 crc kubenswrapper[4796]: I1202 20:15:40.155104 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6bddb9c-450c-4804-a59f-b1b290a74b9a-utilities\") pod \"b6bddb9c-450c-4804-a59f-b1b290a74b9a\" (UID: \"b6bddb9c-450c-4804-a59f-b1b290a74b9a\") " Dec 02 20:15:40 crc kubenswrapper[4796]: I1202 20:15:40.156871 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6bddb9c-450c-4804-a59f-b1b290a74b9a-utilities" (OuterVolumeSpecName: "utilities") pod "b6bddb9c-450c-4804-a59f-b1b290a74b9a" (UID: "b6bddb9c-450c-4804-a59f-b1b290a74b9a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:15:40 crc kubenswrapper[4796]: I1202 20:15:40.163512 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6bddb9c-450c-4804-a59f-b1b290a74b9a-kube-api-access-7www5" (OuterVolumeSpecName: "kube-api-access-7www5") pod "b6bddb9c-450c-4804-a59f-b1b290a74b9a" (UID: "b6bddb9c-450c-4804-a59f-b1b290a74b9a"). InnerVolumeSpecName "kube-api-access-7www5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:15:40 crc kubenswrapper[4796]: I1202 20:15:40.256557 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6bddb9c-450c-4804-a59f-b1b290a74b9a-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 20:15:40 crc kubenswrapper[4796]: I1202 20:15:40.256608 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7www5\" (UniqueName: \"kubernetes.io/projected/b6bddb9c-450c-4804-a59f-b1b290a74b9a-kube-api-access-7www5\") on node \"crc\" DevicePath \"\"" Dec 02 20:15:40 crc kubenswrapper[4796]: I1202 20:15:40.304841 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6bddb9c-450c-4804-a59f-b1b290a74b9a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b6bddb9c-450c-4804-a59f-b1b290a74b9a" (UID: "b6bddb9c-450c-4804-a59f-b1b290a74b9a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:15:40 crc kubenswrapper[4796]: I1202 20:15:40.358358 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6bddb9c-450c-4804-a59f-b1b290a74b9a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 20:15:40 crc kubenswrapper[4796]: I1202 20:15:40.696040 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ks8z" event={"ID":"b6bddb9c-450c-4804-a59f-b1b290a74b9a","Type":"ContainerDied","Data":"d371a6d1f546479971e629824a87f3a9eba6111ebdabdd2a8804caa55c906b0f"} Dec 02 20:15:40 crc kubenswrapper[4796]: I1202 20:15:40.696128 4796 scope.go:117] "RemoveContainer" containerID="f6ff36cf27c8ad9268c930f8bba2e853a982d6fd5781e25a1a99792d8f102249" Dec 02 20:15:40 crc kubenswrapper[4796]: I1202 20:15:40.696158 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4ks8z" Dec 02 20:15:40 crc kubenswrapper[4796]: I1202 20:15:40.716835 4796 scope.go:117] "RemoveContainer" containerID="51a59cf56d9904e9dbfaba97c120076174f2df09f67407b8d6f64438f0952223" Dec 02 20:15:40 crc kubenswrapper[4796]: I1202 20:15:40.727738 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kgxxm" Dec 02 20:15:40 crc kubenswrapper[4796]: I1202 20:15:40.727787 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kgxxm" Dec 02 20:15:40 crc kubenswrapper[4796]: I1202 20:15:40.746575 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4ks8z"] Dec 02 20:15:40 crc kubenswrapper[4796]: I1202 20:15:40.754445 4796 scope.go:117] "RemoveContainer" containerID="3f22a112488e33b80f8d4996fcb33c80fd2b82d87c9e75663f84940083569210" Dec 02 20:15:40 crc kubenswrapper[4796]: I1202 20:15:40.771325 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4ks8z"] Dec 02 20:15:40 crc kubenswrapper[4796]: I1202 20:15:40.793768 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kgxxm" Dec 02 20:15:41 crc kubenswrapper[4796]: I1202 20:15:41.273153 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6bddb9c-450c-4804-a59f-b1b290a74b9a" path="/var/lib/kubelet/pods/b6bddb9c-450c-4804-a59f-b1b290a74b9a/volumes" Dec 02 20:15:41 crc kubenswrapper[4796]: I1202 20:15:41.742064 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kgxxm" Dec 02 20:15:42 crc kubenswrapper[4796]: I1202 20:15:42.991807 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6tx4j"] Dec 02 20:15:43 crc kubenswrapper[4796]: I1202 20:15:43.697354 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kgxxm"] Dec 02 20:15:43 crc kubenswrapper[4796]: I1202 20:15:43.720740 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kgxxm" podUID="839017d2-1f6e-4b5b-93ec-80175eabe5f8" containerName="registry-server" containerID="cri-o://1f527513d42265b661e7e84999b7929d894e87be3cbc0f7cb860c43162cbff0a" gracePeriod=2 Dec 02 20:15:44 crc kubenswrapper[4796]: I1202 20:15:44.133685 4796 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kgxxm" Dec 02 20:15:44 crc kubenswrapper[4796]: I1202 20:15:44.224458 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cdlt\" (UniqueName: \"kubernetes.io/projected/839017d2-1f6e-4b5b-93ec-80175eabe5f8-kube-api-access-5cdlt\") pod \"839017d2-1f6e-4b5b-93ec-80175eabe5f8\" (UID: \"839017d2-1f6e-4b5b-93ec-80175eabe5f8\") " Dec 02 20:15:44 crc kubenswrapper[4796]: I1202 20:15:44.224504 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/839017d2-1f6e-4b5b-93ec-80175eabe5f8-utilities\") pod \"839017d2-1f6e-4b5b-93ec-80175eabe5f8\" (UID: \"839017d2-1f6e-4b5b-93ec-80175eabe5f8\") " Dec 02 20:15:44 crc kubenswrapper[4796]: I1202 20:15:44.224572 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/839017d2-1f6e-4b5b-93ec-80175eabe5f8-catalog-content\") pod \"839017d2-1f6e-4b5b-93ec-80175eabe5f8\" (UID: \"839017d2-1f6e-4b5b-93ec-80175eabe5f8\") " Dec 02 20:15:44 crc kubenswrapper[4796]: I1202 20:15:44.225635 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/839017d2-1f6e-4b5b-93ec-80175eabe5f8-utilities" (OuterVolumeSpecName: "utilities") pod "839017d2-1f6e-4b5b-93ec-80175eabe5f8" (UID: "839017d2-1f6e-4b5b-93ec-80175eabe5f8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:15:44 crc kubenswrapper[4796]: I1202 20:15:44.232403 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/839017d2-1f6e-4b5b-93ec-80175eabe5f8-kube-api-access-5cdlt" (OuterVolumeSpecName: "kube-api-access-5cdlt") pod "839017d2-1f6e-4b5b-93ec-80175eabe5f8" (UID: "839017d2-1f6e-4b5b-93ec-80175eabe5f8"). InnerVolumeSpecName "kube-api-access-5cdlt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:15:44 crc kubenswrapper[4796]: I1202 20:15:44.275944 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/839017d2-1f6e-4b5b-93ec-80175eabe5f8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "839017d2-1f6e-4b5b-93ec-80175eabe5f8" (UID: "839017d2-1f6e-4b5b-93ec-80175eabe5f8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:15:44 crc kubenswrapper[4796]: I1202 20:15:44.327468 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/839017d2-1f6e-4b5b-93ec-80175eabe5f8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 20:15:44 crc kubenswrapper[4796]: I1202 20:15:44.327526 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cdlt\" (UniqueName: \"kubernetes.io/projected/839017d2-1f6e-4b5b-93ec-80175eabe5f8-kube-api-access-5cdlt\") on node \"crc\" DevicePath \"\"" Dec 02 20:15:44 crc kubenswrapper[4796]: I1202 20:15:44.327539 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/839017d2-1f6e-4b5b-93ec-80175eabe5f8-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 20:15:44 crc kubenswrapper[4796]: I1202 20:15:44.728980 4796 generic.go:334] "Generic (PLEG): container finished" podID="839017d2-1f6e-4b5b-93ec-80175eabe5f8" containerID="1f527513d42265b661e7e84999b7929d894e87be3cbc0f7cb860c43162cbff0a" exitCode=0 Dec 02 20:15:44 crc kubenswrapper[4796]: I1202 20:15:44.729146 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kgxxm" event={"ID":"839017d2-1f6e-4b5b-93ec-80175eabe5f8","Type":"ContainerDied","Data":"1f527513d42265b661e7e84999b7929d894e87be3cbc0f7cb860c43162cbff0a"} Dec 02 20:15:44 crc kubenswrapper[4796]: I1202 20:15:44.729310 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kgxxm" event={"ID":"839017d2-1f6e-4b5b-93ec-80175eabe5f8","Type":"ContainerDied","Data":"872a41c7aad584c15f2a6ec3692c56271ac003beccb701fb0d829abd819a9d81"} Dec 02 20:15:44 crc kubenswrapper[4796]: I1202 20:15:44.729352 4796 scope.go:117] "RemoveContainer" containerID="1f527513d42265b661e7e84999b7929d894e87be3cbc0f7cb860c43162cbff0a" Dec 02 20:15:44 crc kubenswrapper[4796]: I1202 20:15:44.729189 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kgxxm" Dec 02 20:15:44 crc kubenswrapper[4796]: I1202 20:15:44.762296 4796 scope.go:117] "RemoveContainer" containerID="f184802d349a8176c10ab25274fcd801db52d37a21ab911508debf6d5c2ebb1b" Dec 02 20:15:44 crc kubenswrapper[4796]: I1202 20:15:44.785181 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kgxxm"] Dec 02 20:15:44 crc kubenswrapper[4796]: I1202 20:15:44.785235 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kgxxm"] Dec 02 20:15:44 crc kubenswrapper[4796]: I1202 20:15:44.796215 4796 scope.go:117] "RemoveContainer" containerID="cf797c3586aab2981ba21c79961f65975cb2a107df657035c85905a2e2ad0ce3" Dec 02 20:15:44 crc kubenswrapper[4796]: I1202 20:15:44.817776 4796 scope.go:117] "RemoveContainer" containerID="1f527513d42265b661e7e84999b7929d894e87be3cbc0f7cb860c43162cbff0a" Dec 02 20:15:44 crc kubenswrapper[4796]: E1202 20:15:44.818604 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f527513d42265b661e7e84999b7929d894e87be3cbc0f7cb860c43162cbff0a\": container with ID starting with 1f527513d42265b661e7e84999b7929d894e87be3cbc0f7cb860c43162cbff0a not found: ID does not exist" containerID="1f527513d42265b661e7e84999b7929d894e87be3cbc0f7cb860c43162cbff0a" Dec 02 20:15:44 crc kubenswrapper[4796]: I1202 20:15:44.818640 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f527513d42265b661e7e84999b7929d894e87be3cbc0f7cb860c43162cbff0a"} err="failed to get container status \"1f527513d42265b661e7e84999b7929d894e87be3cbc0f7cb860c43162cbff0a\": rpc error: code = NotFound desc = could not find container \"1f527513d42265b661e7e84999b7929d894e87be3cbc0f7cb860c43162cbff0a\": container with ID starting with 1f527513d42265b661e7e84999b7929d894e87be3cbc0f7cb860c43162cbff0a not found: ID does not exist" Dec 02 20:15:44 crc kubenswrapper[4796]: I1202 20:15:44.818665 4796 scope.go:117] "RemoveContainer" containerID="f184802d349a8176c10ab25274fcd801db52d37a21ab911508debf6d5c2ebb1b" Dec 02 20:15:44 crc kubenswrapper[4796]: E1202 20:15:44.819333 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f184802d349a8176c10ab25274fcd801db52d37a21ab911508debf6d5c2ebb1b\": container with ID starting with f184802d349a8176c10ab25274fcd801db52d37a21ab911508debf6d5c2ebb1b not found: ID does not exist" containerID="f184802d349a8176c10ab25274fcd801db52d37a21ab911508debf6d5c2ebb1b" Dec 02 20:15:44 crc kubenswrapper[4796]: I1202 20:15:44.819374 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f184802d349a8176c10ab25274fcd801db52d37a21ab911508debf6d5c2ebb1b"} err="failed to get container status \"f184802d349a8176c10ab25274fcd801db52d37a21ab911508debf6d5c2ebb1b\": rpc error: code = NotFound desc = could not find container \"f184802d349a8176c10ab25274fcd801db52d37a21ab911508debf6d5c2ebb1b\": container with ID starting with f184802d349a8176c10ab25274fcd801db52d37a21ab911508debf6d5c2ebb1b not found: ID does not exist" Dec 02 20:15:44 crc kubenswrapper[4796]: I1202 20:15:44.819396 4796 scope.go:117] "RemoveContainer" containerID="cf797c3586aab2981ba21c79961f65975cb2a107df657035c85905a2e2ad0ce3" Dec 02 20:15:44 crc kubenswrapper[4796]: E1202 20:15:44.819696 4796 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"cf797c3586aab2981ba21c79961f65975cb2a107df657035c85905a2e2ad0ce3\": container with ID starting with cf797c3586aab2981ba21c79961f65975cb2a107df657035c85905a2e2ad0ce3 not found: ID does not exist" containerID="cf797c3586aab2981ba21c79961f65975cb2a107df657035c85905a2e2ad0ce3" Dec 02 20:15:44 crc kubenswrapper[4796]: I1202 20:15:44.819717 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf797c3586aab2981ba21c79961f65975cb2a107df657035c85905a2e2ad0ce3"} err="failed to get container status \"cf797c3586aab2981ba21c79961f65975cb2a107df657035c85905a2e2ad0ce3\": rpc error: code = NotFound desc = could not find container \"cf797c3586aab2981ba21c79961f65975cb2a107df657035c85905a2e2ad0ce3\": container with ID starting with cf797c3586aab2981ba21c79961f65975cb2a107df657035c85905a2e2ad0ce3 not found: ID does not exist" Dec 02 20:15:45 crc kubenswrapper[4796]: I1202 20:15:45.278186 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="839017d2-1f6e-4b5b-93ec-80175eabe5f8" path="/var/lib/kubelet/pods/839017d2-1f6e-4b5b-93ec-80175eabe5f8/volumes" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.161703 4796 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 02 20:15:58 crc kubenswrapper[4796]: E1202 20:15:58.163075 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6bddb9c-450c-4804-a59f-b1b290a74b9a" containerName="registry-server" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.163100 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6bddb9c-450c-4804-a59f-b1b290a74b9a" containerName="registry-server" Dec 02 20:15:58 crc kubenswrapper[4796]: E1202 20:15:58.163135 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="839017d2-1f6e-4b5b-93ec-80175eabe5f8" containerName="extract-content" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.163153 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="839017d2-1f6e-4b5b-93ec-80175eabe5f8" containerName="extract-content" Dec 02 20:15:58 crc kubenswrapper[4796]: E1202 20:15:58.163175 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="839017d2-1f6e-4b5b-93ec-80175eabe5f8" containerName="registry-server" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.163187 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="839017d2-1f6e-4b5b-93ec-80175eabe5f8" containerName="registry-server" Dec 02 20:15:58 crc kubenswrapper[4796]: E1202 20:15:58.163207 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88577409-7021-4e24-852b-4d8f6d0c512a" containerName="extract-content" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.163220 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="88577409-7021-4e24-852b-4d8f6d0c512a" containerName="extract-content" Dec 02 20:15:58 crc kubenswrapper[4796]: E1202 20:15:58.163235 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6bddb9c-450c-4804-a59f-b1b290a74b9a" containerName="extract-utilities" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.163247 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6bddb9c-450c-4804-a59f-b1b290a74b9a" containerName="extract-utilities" Dec 02 20:15:58 crc kubenswrapper[4796]: E1202 20:15:58.163321 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b44fde09-6008-44d4-999b-3ddf1c198ff7" 
containerName="registry-server" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.163340 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b44fde09-6008-44d4-999b-3ddf1c198ff7" containerName="registry-server" Dec 02 20:15:58 crc kubenswrapper[4796]: E1202 20:15:58.163403 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88577409-7021-4e24-852b-4d8f6d0c512a" containerName="extract-utilities" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.163420 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="88577409-7021-4e24-852b-4d8f6d0c512a" containerName="extract-utilities" Dec 02 20:15:58 crc kubenswrapper[4796]: E1202 20:15:58.163441 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88577409-7021-4e24-852b-4d8f6d0c512a" containerName="registry-server" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.163455 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="88577409-7021-4e24-852b-4d8f6d0c512a" containerName="registry-server" Dec 02 20:15:58 crc kubenswrapper[4796]: E1202 20:15:58.163476 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b44fde09-6008-44d4-999b-3ddf1c198ff7" containerName="extract-content" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.163489 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b44fde09-6008-44d4-999b-3ddf1c198ff7" containerName="extract-content" Dec 02 20:15:58 crc kubenswrapper[4796]: E1202 20:15:58.163509 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b44fde09-6008-44d4-999b-3ddf1c198ff7" containerName="extract-utilities" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.163521 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b44fde09-6008-44d4-999b-3ddf1c198ff7" containerName="extract-utilities" Dec 02 20:15:58 crc kubenswrapper[4796]: E1202 20:15:58.163540 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="839017d2-1f6e-4b5b-93ec-80175eabe5f8" containerName="extract-utilities" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.163552 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="839017d2-1f6e-4b5b-93ec-80175eabe5f8" containerName="extract-utilities" Dec 02 20:15:58 crc kubenswrapper[4796]: E1202 20:15:58.163573 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6bddb9c-450c-4804-a59f-b1b290a74b9a" containerName="extract-content" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.163584 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6bddb9c-450c-4804-a59f-b1b290a74b9a" containerName="extract-content" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.163765 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="b44fde09-6008-44d4-999b-3ddf1c198ff7" containerName="registry-server" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.163801 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="839017d2-1f6e-4b5b-93ec-80175eabe5f8" containerName="registry-server" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.163822 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="88577409-7021-4e24-852b-4d8f6d0c512a" containerName="registry-server" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.163838 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6bddb9c-450c-4804-a59f-b1b290a74b9a" containerName="registry-server" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.164555 4796 kubelet.go:2431] "SyncLoop REMOVE" source="file" 
pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.165159 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6" gracePeriod=15 Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.165297 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952" gracePeriod=15 Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.165385 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db" gracePeriod=15 Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.165179 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc" gracePeriod=15 Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.165215 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.165550 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086" gracePeriod=15 Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.166048 4796 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 02 20:15:58 crc kubenswrapper[4796]: E1202 20:15:58.166536 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.166568 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 02 20:15:58 crc kubenswrapper[4796]: E1202 20:15:58.166597 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.166612 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 02 20:15:58 crc kubenswrapper[4796]: E1202 20:15:58.166634 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.166649 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-insecure-readyz" Dec 02 20:15:58 crc kubenswrapper[4796]: E1202 20:15:58.166670 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.166686 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 02 20:15:58 crc kubenswrapper[4796]: E1202 20:15:58.166706 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.166722 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 02 20:15:58 crc kubenswrapper[4796]: E1202 20:15:58.166749 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.166763 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 02 20:15:58 crc kubenswrapper[4796]: E1202 20:15:58.166785 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.166798 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.167019 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.167039 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.167061 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.167080 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.167096 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.167122 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.239849 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.239922 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.239998 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.240191 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.240281 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.240330 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.240349 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.240387 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.342105 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.342154 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.342193 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.342223 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.342241 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.342279 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.342299 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.342319 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.342388 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.342438 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.342469 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.342493 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.342515 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.342537 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.342557 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.342579 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.817726 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.819857 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.821457 4796 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc" exitCode=0 Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.821498 4796 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086" exitCode=0 Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.821518 4796 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952" exitCode=0 Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.821534 4796 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db" exitCode=2 Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.821612 4796 scope.go:117] "RemoveContainer" containerID="84501a2c1ea3e5d00ff5df82ad0663cd7b603ab35f11e10c372267c516b493ca" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.823919 4796 generic.go:334] "Generic (PLEG): container finished" 
podID="9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d" containerID="b742a14c8082df2c05f23063ff3f327a062cb167597af4143deb75c6d1f68012" exitCode=0 Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.823994 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d","Type":"ContainerDied","Data":"b742a14c8082df2c05f23063ff3f327a062cb167597af4143deb75c6d1f68012"} Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.825566 4796 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Dec 02 20:15:58 crc kubenswrapper[4796]: I1202 20:15:58.826241 4796 status_manager.go:851] "Failed to get status for pod" podUID="9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Dec 02 20:15:59 crc kubenswrapper[4796]: I1202 20:15:59.848664 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.166204 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.168028 4796 status_manager.go:851] "Failed to get status for pod" podUID="9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.274187 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d-kubelet-dir\") pod \"9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d\" (UID: \"9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d\") " Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.274217 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d" (UID: "9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.274400 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d-var-lock\") pod \"9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d\" (UID: \"9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d\") " Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.274508 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d-kube-api-access\") pod \"9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d\" (UID: \"9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d\") " Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.274777 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d-var-lock" (OuterVolumeSpecName: "var-lock") pod "9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d" (UID: "9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.275054 4796 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.275091 4796 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d-var-lock\") on node \"crc\" DevicePath \"\"" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.293153 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d" (UID: "9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:16:00 crc kubenswrapper[4796]: E1202 20:16:00.360869 4796 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.241:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" volumeName="registry-storage" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.377336 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.564995 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.566293 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.567096 4796 status_manager.go:851] "Failed to get status for pod" podUID="9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.567661 4796 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.683234 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.683145 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.684060 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.684136 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.684382 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.684290 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.685195 4796 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.685426 4796 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.685579 4796 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.860473 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.862394 4796 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6" exitCode=0 Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.862503 4796 scope.go:117] "RemoveContainer" containerID="6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.862589 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.865711 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d","Type":"ContainerDied","Data":"ea078de209ec58fd5c53e3497b4021ddea37fcc5d8d0e41046ee0c0970b78ec1"} Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.865746 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea078de209ec58fd5c53e3497b4021ddea37fcc5d8d0e41046ee0c0970b78ec1" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.865772 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.882352 4796 scope.go:117] "RemoveContainer" containerID="670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.894235 4796 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.894908 4796 status_manager.go:851] "Failed to get status for pod" podUID="9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.897843 4796 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.898318 4796 status_manager.go:851] "Failed to get status for pod" podUID="9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.903766 4796 scope.go:117] "RemoveContainer" containerID="8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.926473 4796 scope.go:117] "RemoveContainer" containerID="a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.947837 4796 scope.go:117] "RemoveContainer" containerID="a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.965716 4796 scope.go:117] "RemoveContainer" containerID="7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.992982 4796 scope.go:117] "RemoveContainer" containerID="6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc" Dec 02 20:16:00 crc kubenswrapper[4796]: E1202 20:16:00.993927 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\": container with ID starting with 6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc not found: ID does not exist" containerID="6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.994123 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc"} err="failed to get container status \"6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\": rpc error: code = NotFound desc = could not find container 
\"6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc\": container with ID starting with 6f049aa2008cfdc6da8e47b030f65ed6d46bc87a25814c64291468ece23fabbc not found: ID does not exist" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.994346 4796 scope.go:117] "RemoveContainer" containerID="670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086" Dec 02 20:16:00 crc kubenswrapper[4796]: E1202 20:16:00.995020 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\": container with ID starting with 670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086 not found: ID does not exist" containerID="670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.995099 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086"} err="failed to get container status \"670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\": rpc error: code = NotFound desc = could not find container \"670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086\": container with ID starting with 670223dfb5ce7a753b2b4ad8829bccc3cd972aa1297c13c2c1a00f2036f3d086 not found: ID does not exist" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.995150 4796 scope.go:117] "RemoveContainer" containerID="8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952" Dec 02 20:16:00 crc kubenswrapper[4796]: E1202 20:16:00.995700 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\": container with ID starting with 8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952 not found: ID does not exist" containerID="8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.995765 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952"} err="failed to get container status \"8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\": rpc error: code = NotFound desc = could not find container \"8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952\": container with ID starting with 8849a74a294739ff22e8426a28c51313819a3833fdd3799114e72d0e55d63952 not found: ID does not exist" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.995813 4796 scope.go:117] "RemoveContainer" containerID="a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db" Dec 02 20:16:00 crc kubenswrapper[4796]: E1202 20:16:00.996839 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\": container with ID starting with a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db not found: ID does not exist" containerID="a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.996880 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db"} 
err="failed to get container status \"a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\": rpc error: code = NotFound desc = could not find container \"a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db\": container with ID starting with a3da2c67880283ee737b7c4467b7ed58f83cff0368218a463614588fa65737db not found: ID does not exist" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.996905 4796 scope.go:117] "RemoveContainer" containerID="a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6" Dec 02 20:16:00 crc kubenswrapper[4796]: E1202 20:16:00.997650 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\": container with ID starting with a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6 not found: ID does not exist" containerID="a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.997737 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6"} err="failed to get container status \"a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\": rpc error: code = NotFound desc = could not find container \"a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6\": container with ID starting with a70cc8cb554f62d3f756f3fcfd08355cf745464fb4d822372ce85c12282b1ec6 not found: ID does not exist" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.997783 4796 scope.go:117] "RemoveContainer" containerID="7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8" Dec 02 20:16:00 crc kubenswrapper[4796]: E1202 20:16:00.998493 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\": container with ID starting with 7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8 not found: ID does not exist" containerID="7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8" Dec 02 20:16:00 crc kubenswrapper[4796]: I1202 20:16:00.998547 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8"} err="failed to get container status \"7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\": rpc error: code = NotFound desc = could not find container \"7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8\": container with ID starting with 7d4a338aa3e008d6bf7fd56e6517261f0f2ddee8da01a89d70609e20457b5ad8 not found: ID does not exist" Dec 02 20:16:01 crc kubenswrapper[4796]: I1202 20:16:01.276031 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 02 20:16:02 crc kubenswrapper[4796]: E1202 20:16:02.431318 4796 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" Dec 02 20:16:02 crc kubenswrapper[4796]: E1202 20:16:02.431947 4796 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" Dec 02 20:16:02 crc kubenswrapper[4796]: E1202 20:16:02.432593 4796 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" Dec 02 20:16:02 crc kubenswrapper[4796]: E1202 20:16:02.432970 4796 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" Dec 02 20:16:02 crc kubenswrapper[4796]: E1202 20:16:02.433359 4796 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" Dec 02 20:16:02 crc kubenswrapper[4796]: I1202 20:16:02.433394 4796 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 02 20:16:02 crc kubenswrapper[4796]: E1202 20:16:02.433592 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" interval="200ms" Dec 02 20:16:02 crc kubenswrapper[4796]: E1202 20:16:02.634785 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" interval="400ms" Dec 02 20:16:03 crc kubenswrapper[4796]: E1202 20:16:03.036640 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" interval="800ms" Dec 02 20:16:03 crc kubenswrapper[4796]: E1202 20:16:03.205467 4796 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.241:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 20:16:03 crc kubenswrapper[4796]: I1202 20:16:03.206013 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 20:16:03 crc kubenswrapper[4796]: E1202 20:16:03.245950 4796 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.241:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187d7f44fb02d67a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-02 20:16:03.244873338 +0000 UTC m=+246.248248882,LastTimestamp:2025-12-02 20:16:03.244873338 +0000 UTC m=+246.248248882,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 02 20:16:03 crc kubenswrapper[4796]: E1202 20:16:03.566702 4796 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.241:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187d7f44fb02d67a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-02 20:16:03.244873338 +0000 UTC m=+246.248248882,LastTimestamp:2025-12-02 20:16:03.244873338 +0000 UTC m=+246.248248882,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 02 20:16:03 crc kubenswrapper[4796]: E1202 20:16:03.837943 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" interval="1.6s" Dec 02 20:16:03 crc kubenswrapper[4796]: I1202 20:16:03.893341 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"a736360610ec7ac94741394b199b9b8ed392405d412cef252442923585c41096"} Dec 02 20:16:03 crc kubenswrapper[4796]: I1202 20:16:03.893441 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"bbfb44b7f4602e45549559a42a8d3748ed8972b7bd0f560e6bdb30c1558e9198"} Dec 02 20:16:03 crc kubenswrapper[4796]: I1202 
20:16:03.894122 4796 status_manager.go:851] "Failed to get status for pod" podUID="9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Dec 02 20:16:03 crc kubenswrapper[4796]: E1202 20:16:03.894195 4796 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.241:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 20:16:05 crc kubenswrapper[4796]: E1202 20:16:05.439718 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" interval="3.2s" Dec 02 20:16:07 crc kubenswrapper[4796]: I1202 20:16:07.268339 4796 status_manager.go:851] "Failed to get status for pod" podUID="9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.018427 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" podUID="d39a70e8-5d53-431a-8413-bbb2041fc8dd" containerName="oauth-openshift" containerID="cri-o://72c213f9ca2cac551450cb537f6a4775d532f541bc91084948551b928d75a112" gracePeriod=15 Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.431533 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.432921 4796 status_manager.go:851] "Failed to get status for pod" podUID="d39a70e8-5d53-431a-8413-bbb2041fc8dd" pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-6tx4j\": dial tcp 38.102.83.241:6443: connect: connection refused" Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.433389 4796 status_manager.go:851] "Failed to get status for pod" podUID="9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.520823 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d39a70e8-5d53-431a-8413-bbb2041fc8dd-audit-dir\") pod \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.520886 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-user-template-provider-selection\") pod \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.520908 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-user-template-login\") pod \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.520946 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-system-session\") pod \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.520977 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-user-idp-0-file-data\") pod \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.520963 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d39a70e8-5d53-431a-8413-bbb2041fc8dd-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "d39a70e8-5d53-431a-8413-bbb2041fc8dd" (UID: "d39a70e8-5d53-431a-8413-bbb2041fc8dd"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.520999 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-system-trusted-ca-bundle\") pod \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.521028 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d39a70e8-5d53-431a-8413-bbb2041fc8dd-audit-policies\") pod \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.521046 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-user-template-error\") pod \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.521064 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-system-service-ca\") pod \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.521083 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-system-ocp-branding-template\") pod \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.521103 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-system-serving-cert\") pod \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.521119 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-system-cliconfig\") pod \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.521139 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9mhn\" (UniqueName: \"kubernetes.io/projected/d39a70e8-5d53-431a-8413-bbb2041fc8dd-kube-api-access-v9mhn\") pod \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.521155 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-system-router-certs\") pod \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\" (UID: \"d39a70e8-5d53-431a-8413-bbb2041fc8dd\") " Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 
20:16:08.521299 4796 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d39a70e8-5d53-431a-8413-bbb2041fc8dd-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.522157 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "d39a70e8-5d53-431a-8413-bbb2041fc8dd" (UID: "d39a70e8-5d53-431a-8413-bbb2041fc8dd"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.522181 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "d39a70e8-5d53-431a-8413-bbb2041fc8dd" (UID: "d39a70e8-5d53-431a-8413-bbb2041fc8dd"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.522166 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "d39a70e8-5d53-431a-8413-bbb2041fc8dd" (UID: "d39a70e8-5d53-431a-8413-bbb2041fc8dd"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.522904 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d39a70e8-5d53-431a-8413-bbb2041fc8dd-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "d39a70e8-5d53-431a-8413-bbb2041fc8dd" (UID: "d39a70e8-5d53-431a-8413-bbb2041fc8dd"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.528857 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d39a70e8-5d53-431a-8413-bbb2041fc8dd-kube-api-access-v9mhn" (OuterVolumeSpecName: "kube-api-access-v9mhn") pod "d39a70e8-5d53-431a-8413-bbb2041fc8dd" (UID: "d39a70e8-5d53-431a-8413-bbb2041fc8dd"). InnerVolumeSpecName "kube-api-access-v9mhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.528964 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "d39a70e8-5d53-431a-8413-bbb2041fc8dd" (UID: "d39a70e8-5d53-431a-8413-bbb2041fc8dd"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.529333 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "d39a70e8-5d53-431a-8413-bbb2041fc8dd" (UID: "d39a70e8-5d53-431a-8413-bbb2041fc8dd"). 
InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.529714 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "d39a70e8-5d53-431a-8413-bbb2041fc8dd" (UID: "d39a70e8-5d53-431a-8413-bbb2041fc8dd"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.532089 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "d39a70e8-5d53-431a-8413-bbb2041fc8dd" (UID: "d39a70e8-5d53-431a-8413-bbb2041fc8dd"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.532547 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "d39a70e8-5d53-431a-8413-bbb2041fc8dd" (UID: "d39a70e8-5d53-431a-8413-bbb2041fc8dd"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.532813 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "d39a70e8-5d53-431a-8413-bbb2041fc8dd" (UID: "d39a70e8-5d53-431a-8413-bbb2041fc8dd"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.532829 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "d39a70e8-5d53-431a-8413-bbb2041fc8dd" (UID: "d39a70e8-5d53-431a-8413-bbb2041fc8dd"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.533072 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "d39a70e8-5d53-431a-8413-bbb2041fc8dd" (UID: "d39a70e8-5d53-431a-8413-bbb2041fc8dd"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.622494 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.622532 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.622546 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.622559 4796 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d39a70e8-5d53-431a-8413-bbb2041fc8dd-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.622571 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.622582 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.622593 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.622603 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.622613 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.622622 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9mhn\" (UniqueName: \"kubernetes.io/projected/d39a70e8-5d53-431a-8413-bbb2041fc8dd-kube-api-access-v9mhn\") on node \"crc\" DevicePath \"\"" Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.622631 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.622641 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.622652 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d39a70e8-5d53-431a-8413-bbb2041fc8dd-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 02 20:16:08 crc kubenswrapper[4796]: E1202 20:16:08.641606 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" interval="6.4s" Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.956852 4796 generic.go:334] "Generic (PLEG): container finished" podID="d39a70e8-5d53-431a-8413-bbb2041fc8dd" containerID="72c213f9ca2cac551450cb537f6a4775d532f541bc91084948551b928d75a112" exitCode=0 Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.956909 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" event={"ID":"d39a70e8-5d53-431a-8413-bbb2041fc8dd","Type":"ContainerDied","Data":"72c213f9ca2cac551450cb537f6a4775d532f541bc91084948551b928d75a112"} Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.956940 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.957471 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" event={"ID":"d39a70e8-5d53-431a-8413-bbb2041fc8dd","Type":"ContainerDied","Data":"661cf9c9e20cc34fc83383b90bbc61d63b7f159fea2daeede1974a5a8347d247"} Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.957585 4796 scope.go:117] "RemoveContainer" containerID="72c213f9ca2cac551450cb537f6a4775d532f541bc91084948551b928d75a112" Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.958487 4796 status_manager.go:851] "Failed to get status for pod" podUID="d39a70e8-5d53-431a-8413-bbb2041fc8dd" pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-6tx4j\": dial tcp 38.102.83.241:6443: connect: connection refused" Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.959068 4796 status_manager.go:851] "Failed to get status for pod" podUID="9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.979882 4796 scope.go:117] "RemoveContainer" containerID="72c213f9ca2cac551450cb537f6a4775d532f541bc91084948551b928d75a112" Dec 02 20:16:08 crc kubenswrapper[4796]: E1202 20:16:08.980319 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72c213f9ca2cac551450cb537f6a4775d532f541bc91084948551b928d75a112\": container with ID starting with 72c213f9ca2cac551450cb537f6a4775d532f541bc91084948551b928d75a112 not found: ID does not exist" containerID="72c213f9ca2cac551450cb537f6a4775d532f541bc91084948551b928d75a112" Dec 02 20:16:08 crc 
kubenswrapper[4796]: I1202 20:16:08.980354 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72c213f9ca2cac551450cb537f6a4775d532f541bc91084948551b928d75a112"} err="failed to get container status \"72c213f9ca2cac551450cb537f6a4775d532f541bc91084948551b928d75a112\": rpc error: code = NotFound desc = could not find container \"72c213f9ca2cac551450cb537f6a4775d532f541bc91084948551b928d75a112\": container with ID starting with 72c213f9ca2cac551450cb537f6a4775d532f541bc91084948551b928d75a112 not found: ID does not exist" Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.985579 4796 status_manager.go:851] "Failed to get status for pod" podUID="9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Dec 02 20:16:08 crc kubenswrapper[4796]: I1202 20:16:08.985791 4796 status_manager.go:851] "Failed to get status for pod" podUID="d39a70e8-5d53-431a-8413-bbb2041fc8dd" pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-6tx4j\": dial tcp 38.102.83.241:6443: connect: connection refused" Dec 02 20:16:09 crc kubenswrapper[4796]: I1202 20:16:09.264239 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 20:16:09 crc kubenswrapper[4796]: I1202 20:16:09.265450 4796 status_manager.go:851] "Failed to get status for pod" podUID="d39a70e8-5d53-431a-8413-bbb2041fc8dd" pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-6tx4j\": dial tcp 38.102.83.241:6443: connect: connection refused" Dec 02 20:16:09 crc kubenswrapper[4796]: I1202 20:16:09.266089 4796 status_manager.go:851] "Failed to get status for pod" podUID="9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Dec 02 20:16:09 crc kubenswrapper[4796]: I1202 20:16:09.285442 4796 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="423e1ef0-d2b1-442c-8751-372d2de26a00" Dec 02 20:16:09 crc kubenswrapper[4796]: I1202 20:16:09.285505 4796 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="423e1ef0-d2b1-442c-8751-372d2de26a00" Dec 02 20:16:09 crc kubenswrapper[4796]: E1202 20:16:09.286087 4796 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 20:16:09 crc kubenswrapper[4796]: I1202 20:16:09.287686 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 20:16:09 crc kubenswrapper[4796]: W1202 20:16:09.325244 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-e732da2aaec58125039a70912318840394a637b5233b96f5453cd49a24e90166 WatchSource:0}: Error finding container e732da2aaec58125039a70912318840394a637b5233b96f5453cd49a24e90166: Status 404 returned error can't find the container with id e732da2aaec58125039a70912318840394a637b5233b96f5453cd49a24e90166 Dec 02 20:16:09 crc kubenswrapper[4796]: I1202 20:16:09.970466 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"49ce10109405f58c36b32409ab90670ca57d66426f1882e3d7cf8fdecee09b27"} Dec 02 20:16:09 crc kubenswrapper[4796]: I1202 20:16:09.970526 4796 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="49ce10109405f58c36b32409ab90670ca57d66426f1882e3d7cf8fdecee09b27" exitCode=0 Dec 02 20:16:09 crc kubenswrapper[4796]: I1202 20:16:09.970930 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e732da2aaec58125039a70912318840394a637b5233b96f5453cd49a24e90166"} Dec 02 20:16:09 crc kubenswrapper[4796]: I1202 20:16:09.971205 4796 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="423e1ef0-d2b1-442c-8751-372d2de26a00" Dec 02 20:16:09 crc kubenswrapper[4796]: I1202 20:16:09.971229 4796 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="423e1ef0-d2b1-442c-8751-372d2de26a00" Dec 02 20:16:09 crc kubenswrapper[4796]: I1202 20:16:09.971864 4796 status_manager.go:851] "Failed to get status for pod" podUID="d39a70e8-5d53-431a-8413-bbb2041fc8dd" pod="openshift-authentication/oauth-openshift-558db77b4-6tx4j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-6tx4j\": dial tcp 38.102.83.241:6443: connect: connection refused" Dec 02 20:16:09 crc kubenswrapper[4796]: E1202 20:16:09.971854 4796 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 20:16:09 crc kubenswrapper[4796]: I1202 20:16:09.972441 4796 status_manager.go:851] "Failed to get status for pod" podUID="9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Dec 02 20:16:10 crc kubenswrapper[4796]: I1202 20:16:10.978922 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1509305533af181cc2741b8a723b53f6f06c1c6f4ae14dc3ddbef446d63d78b1"} Dec 02 20:16:10 crc kubenswrapper[4796]: I1202 20:16:10.979164 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f72f7432f38f9cedcd90d5d40e521324b9d5781fe4f08d9ae2ac4d8fd5ad9d1f"} Dec 02 20:16:10 crc kubenswrapper[4796]: I1202 20:16:10.979179 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fe9e4a14691a076b60e3b56d1cb5afb13c177086b943230b5b3ce629ff52afdd"} Dec 02 20:16:11 crc kubenswrapper[4796]: I1202 20:16:11.990389 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0feeea5df55df794e63a90a3da3e8a227650240c7a877a435a56a29b77333e22"} Dec 02 20:16:11 crc kubenswrapper[4796]: I1202 20:16:11.991512 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 20:16:11 crc kubenswrapper[4796]: I1202 20:16:11.991602 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"879501718fc874370a2791ee54f297d5b3608d563bb5bf928ebe3c14086d3ac5"} Dec 02 20:16:11 crc kubenswrapper[4796]: I1202 20:16:11.991285 4796 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="423e1ef0-d2b1-442c-8751-372d2de26a00" Dec 02 20:16:11 crc kubenswrapper[4796]: I1202 20:16:11.991758 4796 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="423e1ef0-d2b1-442c-8751-372d2de26a00" Dec 02 20:16:14 crc kubenswrapper[4796]: I1202 20:16:14.008890 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 02 20:16:14 crc kubenswrapper[4796]: I1202 20:16:14.009500 4796 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="3931263dfbba517a3f109c2fec313fdb38b7232f4384fbf922d2de2f40d95840" exitCode=1 Dec 02 20:16:14 crc kubenswrapper[4796]: I1202 20:16:14.009560 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"3931263dfbba517a3f109c2fec313fdb38b7232f4384fbf922d2de2f40d95840"} Dec 02 20:16:14 crc kubenswrapper[4796]: I1202 20:16:14.010921 4796 scope.go:117] "RemoveContainer" containerID="3931263dfbba517a3f109c2fec313fdb38b7232f4384fbf922d2de2f40d95840" Dec 02 20:16:14 crc kubenswrapper[4796]: I1202 20:16:14.289782 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 20:16:14 crc kubenswrapper[4796]: I1202 20:16:14.290286 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 20:16:14 crc kubenswrapper[4796]: I1202 20:16:14.298569 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 20:16:15 crc kubenswrapper[4796]: I1202 20:16:15.022782 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 02 20:16:15 crc 
kubenswrapper[4796]: I1202 20:16:15.022855 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bd7b62f5045efaafbea0ef604a062bbaa9e5145d430c2138250df0e63063bae2"} Dec 02 20:16:17 crc kubenswrapper[4796]: I1202 20:16:17.003918 4796 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 20:16:17 crc kubenswrapper[4796]: I1202 20:16:17.275103 4796 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="584738ae-ac7a-440b-8b9a-258ac1726d47" Dec 02 20:16:18 crc kubenswrapper[4796]: I1202 20:16:18.046440 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_71bb4a3aecc4ba5b26c4b7318770ce13/kube-apiserver-check-endpoints/0.log" Dec 02 20:16:18 crc kubenswrapper[4796]: I1202 20:16:18.049546 4796 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="879501718fc874370a2791ee54f297d5b3608d563bb5bf928ebe3c14086d3ac5" exitCode=255 Dec 02 20:16:18 crc kubenswrapper[4796]: I1202 20:16:18.049648 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"879501718fc874370a2791ee54f297d5b3608d563bb5bf928ebe3c14086d3ac5"} Dec 02 20:16:18 crc kubenswrapper[4796]: I1202 20:16:18.050372 4796 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="423e1ef0-d2b1-442c-8751-372d2de26a00" Dec 02 20:16:18 crc kubenswrapper[4796]: I1202 20:16:18.050461 4796 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="423e1ef0-d2b1-442c-8751-372d2de26a00" Dec 02 20:16:18 crc kubenswrapper[4796]: I1202 20:16:18.055487 4796 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="584738ae-ac7a-440b-8b9a-258ac1726d47" Dec 02 20:16:18 crc kubenswrapper[4796]: I1202 20:16:18.056391 4796 scope.go:117] "RemoveContainer" containerID="879501718fc874370a2791ee54f297d5b3608d563bb5bf928ebe3c14086d3ac5" Dec 02 20:16:18 crc kubenswrapper[4796]: I1202 20:16:18.063924 4796 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://fe9e4a14691a076b60e3b56d1cb5afb13c177086b943230b5b3ce629ff52afdd" Dec 02 20:16:18 crc kubenswrapper[4796]: I1202 20:16:18.063953 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 20:16:19 crc kubenswrapper[4796]: I1202 20:16:19.059495 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_71bb4a3aecc4ba5b26c4b7318770ce13/kube-apiserver-check-endpoints/0.log" Dec 02 20:16:19 crc kubenswrapper[4796]: I1202 20:16:19.063627 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"160f54fd112ade69f782be9159b1958559cac3d16c19873cb73c85d1204308f2"} Dec 02 20:16:19 crc kubenswrapper[4796]: I1202 
20:16:19.063947 4796 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="423e1ef0-d2b1-442c-8751-372d2de26a00" Dec 02 20:16:19 crc kubenswrapper[4796]: I1202 20:16:19.063979 4796 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="423e1ef0-d2b1-442c-8751-372d2de26a00" Dec 02 20:16:19 crc kubenswrapper[4796]: I1202 20:16:19.064348 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 20:16:19 crc kubenswrapper[4796]: I1202 20:16:19.069723 4796 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="584738ae-ac7a-440b-8b9a-258ac1726d47" Dec 02 20:16:20 crc kubenswrapper[4796]: I1202 20:16:20.070140 4796 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="423e1ef0-d2b1-442c-8751-372d2de26a00" Dec 02 20:16:20 crc kubenswrapper[4796]: I1202 20:16:20.070180 4796 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="423e1ef0-d2b1-442c-8751-372d2de26a00" Dec 02 20:16:20 crc kubenswrapper[4796]: I1202 20:16:20.074124 4796 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="584738ae-ac7a-440b-8b9a-258ac1726d47" Dec 02 20:16:21 crc kubenswrapper[4796]: I1202 20:16:21.194944 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 20:16:21 crc kubenswrapper[4796]: I1202 20:16:21.423099 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 20:16:21 crc kubenswrapper[4796]: I1202 20:16:21.431106 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 20:16:23 crc kubenswrapper[4796]: I1202 20:16:23.399075 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 02 20:16:24 crc kubenswrapper[4796]: I1202 20:16:24.628059 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 02 20:16:27 crc kubenswrapper[4796]: I1202 20:16:27.089723 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 02 20:16:27 crc kubenswrapper[4796]: I1202 20:16:27.875098 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 02 20:16:27 crc kubenswrapper[4796]: I1202 20:16:27.904435 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 02 20:16:28 crc kubenswrapper[4796]: I1202 20:16:28.318657 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 02 20:16:28 crc kubenswrapper[4796]: I1202 20:16:28.468665 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 02 20:16:28 crc kubenswrapper[4796]: I1202 20:16:28.552669 4796 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 02 20:16:28 crc kubenswrapper[4796]: I1202 20:16:28.764197 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 02 20:16:28 crc kubenswrapper[4796]: I1202 20:16:28.797849 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 02 20:16:28 crc kubenswrapper[4796]: I1202 20:16:28.953448 4796 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 02 20:16:29 crc kubenswrapper[4796]: I1202 20:16:29.085287 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 02 20:16:29 crc kubenswrapper[4796]: I1202 20:16:29.200650 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 02 20:16:29 crc kubenswrapper[4796]: I1202 20:16:29.214909 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 02 20:16:29 crc kubenswrapper[4796]: I1202 20:16:29.398488 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 02 20:16:29 crc kubenswrapper[4796]: I1202 20:16:29.646523 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 02 20:16:29 crc kubenswrapper[4796]: I1202 20:16:29.950448 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 02 20:16:30 crc kubenswrapper[4796]: I1202 20:16:30.537520 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 02 20:16:30 crc kubenswrapper[4796]: I1202 20:16:30.707588 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 02 20:16:30 crc kubenswrapper[4796]: I1202 20:16:30.798899 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 02 20:16:31 crc kubenswrapper[4796]: I1202 20:16:31.092395 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 02 20:16:31 crc kubenswrapper[4796]: I1202 20:16:31.199598 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 20:16:31 crc kubenswrapper[4796]: I1202 20:16:31.478840 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 02 20:16:31 crc kubenswrapper[4796]: I1202 20:16:31.501571 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 02 20:16:31 crc kubenswrapper[4796]: I1202 20:16:31.546673 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 02 20:16:31 crc kubenswrapper[4796]: I1202 20:16:31.575627 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 02 20:16:31 crc 
kubenswrapper[4796]: I1202 20:16:31.698591 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 02 20:16:31 crc kubenswrapper[4796]: I1202 20:16:31.751152 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 02 20:16:31 crc kubenswrapper[4796]: I1202 20:16:31.839122 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 02 20:16:31 crc kubenswrapper[4796]: I1202 20:16:31.913382 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 02 20:16:31 crc kubenswrapper[4796]: I1202 20:16:31.943150 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 02 20:16:32 crc kubenswrapper[4796]: I1202 20:16:32.027361 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 02 20:16:32 crc kubenswrapper[4796]: I1202 20:16:32.346514 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 02 20:16:32 crc kubenswrapper[4796]: I1202 20:16:32.463684 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 02 20:16:32 crc kubenswrapper[4796]: I1202 20:16:32.547484 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 02 20:16:32 crc kubenswrapper[4796]: I1202 20:16:32.782673 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 02 20:16:32 crc kubenswrapper[4796]: I1202 20:16:32.866034 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 02 20:16:32 crc kubenswrapper[4796]: I1202 20:16:32.911287 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 02 20:16:33 crc kubenswrapper[4796]: I1202 20:16:33.028381 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 02 20:16:33 crc kubenswrapper[4796]: I1202 20:16:33.041200 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 02 20:16:33 crc kubenswrapper[4796]: I1202 20:16:33.128915 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 02 20:16:33 crc kubenswrapper[4796]: I1202 20:16:33.199529 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 02 20:16:33 crc kubenswrapper[4796]: I1202 20:16:33.300517 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 02 20:16:33 crc kubenswrapper[4796]: I1202 20:16:33.309915 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 02 20:16:33 crc kubenswrapper[4796]: I1202 20:16:33.384512 4796 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 02 20:16:33 crc kubenswrapper[4796]: I1202 20:16:33.431959 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 02 20:16:33 crc kubenswrapper[4796]: I1202 20:16:33.490688 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 02 20:16:33 crc kubenswrapper[4796]: I1202 20:16:33.574672 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 02 20:16:33 crc kubenswrapper[4796]: I1202 20:16:33.583192 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 02 20:16:33 crc kubenswrapper[4796]: I1202 20:16:33.660615 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 02 20:16:33 crc kubenswrapper[4796]: I1202 20:16:33.662880 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 02 20:16:33 crc kubenswrapper[4796]: I1202 20:16:33.733473 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 02 20:16:33 crc kubenswrapper[4796]: I1202 20:16:33.841035 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 02 20:16:33 crc kubenswrapper[4796]: I1202 20:16:33.851949 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 02 20:16:33 crc kubenswrapper[4796]: I1202 20:16:33.940769 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 02 20:16:33 crc kubenswrapper[4796]: I1202 20:16:33.960934 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 02 20:16:34 crc kubenswrapper[4796]: I1202 20:16:34.013830 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 02 20:16:34 crc kubenswrapper[4796]: I1202 20:16:34.017856 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 02 20:16:34 crc kubenswrapper[4796]: I1202 20:16:34.086159 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 02 20:16:34 crc kubenswrapper[4796]: I1202 20:16:34.188385 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 02 20:16:34 crc kubenswrapper[4796]: I1202 20:16:34.220391 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 02 20:16:34 crc kubenswrapper[4796]: I1202 20:16:34.222644 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 02 20:16:34 crc kubenswrapper[4796]: I1202 20:16:34.328340 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 02 20:16:34 crc kubenswrapper[4796]: I1202 20:16:34.441527 4796 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 02 20:16:34 crc kubenswrapper[4796]: I1202 20:16:34.443438 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 02 20:16:34 crc kubenswrapper[4796]: I1202 20:16:34.459310 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 02 20:16:34 crc kubenswrapper[4796]: I1202 20:16:34.519854 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 02 20:16:34 crc kubenswrapper[4796]: I1202 20:16:34.557391 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 02 20:16:34 crc kubenswrapper[4796]: I1202 20:16:34.591469 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 02 20:16:34 crc kubenswrapper[4796]: I1202 20:16:34.594400 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 02 20:16:34 crc kubenswrapper[4796]: I1202 20:16:34.716375 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 02 20:16:34 crc kubenswrapper[4796]: I1202 20:16:34.732055 4796 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 02 20:16:34 crc kubenswrapper[4796]: I1202 20:16:34.751793 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 02 20:16:34 crc kubenswrapper[4796]: I1202 20:16:34.776786 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 02 20:16:34 crc kubenswrapper[4796]: I1202 20:16:34.791778 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 02 20:16:34 crc kubenswrapper[4796]: I1202 20:16:34.813045 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 02 20:16:34 crc kubenswrapper[4796]: I1202 20:16:34.884157 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 02 20:16:34 crc kubenswrapper[4796]: I1202 20:16:34.903550 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 02 20:16:34 crc kubenswrapper[4796]: I1202 20:16:34.914429 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 02 20:16:35 crc kubenswrapper[4796]: I1202 20:16:35.075815 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 02 20:16:35 crc kubenswrapper[4796]: I1202 20:16:35.108050 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 02 20:16:35 crc kubenswrapper[4796]: I1202 20:16:35.116790 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 02 20:16:35 crc kubenswrapper[4796]: I1202 20:16:35.204938 4796 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 02 20:16:35 crc kubenswrapper[4796]: I1202 20:16:35.276172 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 02 20:16:35 crc kubenswrapper[4796]: I1202 20:16:35.337519 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 02 20:16:35 crc kubenswrapper[4796]: I1202 20:16:35.380592 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 02 20:16:35 crc kubenswrapper[4796]: I1202 20:16:35.436955 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 02 20:16:35 crc kubenswrapper[4796]: I1202 20:16:35.571354 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 02 20:16:35 crc kubenswrapper[4796]: I1202 20:16:35.593481 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 02 20:16:35 crc kubenswrapper[4796]: I1202 20:16:35.646389 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 02 20:16:35 crc kubenswrapper[4796]: I1202 20:16:35.762066 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 02 20:16:35 crc kubenswrapper[4796]: I1202 20:16:35.841606 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 02 20:16:35 crc kubenswrapper[4796]: I1202 20:16:35.849723 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 02 20:16:36 crc kubenswrapper[4796]: I1202 20:16:36.008072 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 02 20:16:36 crc kubenswrapper[4796]: I1202 20:16:36.033635 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 02 20:16:36 crc kubenswrapper[4796]: I1202 20:16:36.094853 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 02 20:16:36 crc kubenswrapper[4796]: I1202 20:16:36.113495 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 02 20:16:36 crc kubenswrapper[4796]: I1202 20:16:36.207087 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 02 20:16:36 crc kubenswrapper[4796]: I1202 20:16:36.258022 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 02 20:16:36 crc kubenswrapper[4796]: I1202 20:16:36.345464 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 02 20:16:36 crc kubenswrapper[4796]: I1202 20:16:36.488182 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 02 20:16:36 crc kubenswrapper[4796]: I1202 20:16:36.515477 4796 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 02 20:16:36 crc kubenswrapper[4796]: I1202 20:16:36.632999 4796 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 02 20:16:36 crc kubenswrapper[4796]: I1202 20:16:36.762008 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 02 20:16:36 crc kubenswrapper[4796]: I1202 20:16:36.891014 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 02 20:16:37 crc kubenswrapper[4796]: I1202 20:16:37.068000 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 02 20:16:37 crc kubenswrapper[4796]: I1202 20:16:37.142376 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 02 20:16:37 crc kubenswrapper[4796]: I1202 20:16:37.171047 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 02 20:16:37 crc kubenswrapper[4796]: I1202 20:16:37.185828 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 02 20:16:37 crc kubenswrapper[4796]: I1202 20:16:37.191726 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 02 20:16:37 crc kubenswrapper[4796]: I1202 20:16:37.310567 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 02 20:16:37 crc kubenswrapper[4796]: I1202 20:16:37.349328 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 02 20:16:37 crc kubenswrapper[4796]: I1202 20:16:37.408658 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 02 20:16:37 crc kubenswrapper[4796]: I1202 20:16:37.413484 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 02 20:16:37 crc kubenswrapper[4796]: I1202 20:16:37.476349 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 02 20:16:37 crc kubenswrapper[4796]: I1202 20:16:37.543790 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 02 20:16:37 crc kubenswrapper[4796]: I1202 20:16:37.615648 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 02 20:16:37 crc kubenswrapper[4796]: I1202 20:16:37.628931 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 02 20:16:37 crc kubenswrapper[4796]: I1202 20:16:37.690099 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 02 20:16:37 crc kubenswrapper[4796]: I1202 20:16:37.757098 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 02 20:16:37 crc kubenswrapper[4796]: I1202 20:16:37.818874 4796 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 02 20:16:37 crc kubenswrapper[4796]: I1202 20:16:37.938496 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 02 20:16:37 crc kubenswrapper[4796]: I1202 20:16:37.997619 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 02 20:16:38 crc kubenswrapper[4796]: I1202 20:16:38.016924 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 02 20:16:38 crc kubenswrapper[4796]: I1202 20:16:38.036537 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 02 20:16:38 crc kubenswrapper[4796]: I1202 20:16:38.069487 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 02 20:16:38 crc kubenswrapper[4796]: I1202 20:16:38.112299 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 02 20:16:38 crc kubenswrapper[4796]: I1202 20:16:38.174179 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 02 20:16:38 crc kubenswrapper[4796]: I1202 20:16:38.176473 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 02 20:16:38 crc kubenswrapper[4796]: I1202 20:16:38.253655 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 02 20:16:38 crc kubenswrapper[4796]: I1202 20:16:38.286654 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 02 20:16:38 crc kubenswrapper[4796]: I1202 20:16:38.360618 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 02 20:16:38 crc kubenswrapper[4796]: I1202 20:16:38.414797 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 02 20:16:38 crc kubenswrapper[4796]: I1202 20:16:38.424228 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 02 20:16:38 crc kubenswrapper[4796]: I1202 20:16:38.477707 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 02 20:16:38 crc kubenswrapper[4796]: I1202 20:16:38.610486 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 02 20:16:38 crc kubenswrapper[4796]: I1202 20:16:38.650024 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 02 20:16:38 crc kubenswrapper[4796]: I1202 20:16:38.652937 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 02 20:16:38 crc kubenswrapper[4796]: I1202 20:16:38.694411 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 02 20:16:38 crc kubenswrapper[4796]: I1202 20:16:38.697941 4796 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication-operator"/"serving-cert" Dec 02 20:16:38 crc kubenswrapper[4796]: I1202 20:16:38.710947 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 02 20:16:38 crc kubenswrapper[4796]: I1202 20:16:38.805674 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 02 20:16:38 crc kubenswrapper[4796]: I1202 20:16:38.836024 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 02 20:16:38 crc kubenswrapper[4796]: I1202 20:16:38.847560 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 02 20:16:38 crc kubenswrapper[4796]: I1202 20:16:38.857472 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 02 20:16:38 crc kubenswrapper[4796]: I1202 20:16:38.873206 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 02 20:16:38 crc kubenswrapper[4796]: I1202 20:16:38.904602 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 02 20:16:38 crc kubenswrapper[4796]: I1202 20:16:38.970710 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 02 20:16:38 crc kubenswrapper[4796]: I1202 20:16:38.970916 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 02 20:16:39 crc kubenswrapper[4796]: I1202 20:16:39.034796 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 02 20:16:39 crc kubenswrapper[4796]: I1202 20:16:39.035061 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 02 20:16:39 crc kubenswrapper[4796]: I1202 20:16:39.213749 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 02 20:16:39 crc kubenswrapper[4796]: I1202 20:16:39.254810 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 02 20:16:39 crc kubenswrapper[4796]: I1202 20:16:39.302609 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 02 20:16:39 crc kubenswrapper[4796]: I1202 20:16:39.311034 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 02 20:16:39 crc kubenswrapper[4796]: I1202 20:16:39.322845 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 02 20:16:39 crc kubenswrapper[4796]: I1202 20:16:39.343546 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 02 20:16:39 crc kubenswrapper[4796]: I1202 20:16:39.386537 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 02 20:16:39 crc 
kubenswrapper[4796]: I1202 20:16:39.521199 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 02 20:16:39 crc kubenswrapper[4796]: I1202 20:16:39.585391 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 02 20:16:39 crc kubenswrapper[4796]: I1202 20:16:39.605358 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 02 20:16:39 crc kubenswrapper[4796]: I1202 20:16:39.623507 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 02 20:16:40 crc kubenswrapper[4796]: I1202 20:16:40.010888 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 02 20:16:40 crc kubenswrapper[4796]: I1202 20:16:40.031160 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 02 20:16:40 crc kubenswrapper[4796]: I1202 20:16:40.065154 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 02 20:16:40 crc kubenswrapper[4796]: I1202 20:16:40.081080 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 02 20:16:40 crc kubenswrapper[4796]: I1202 20:16:40.116595 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 02 20:16:40 crc kubenswrapper[4796]: I1202 20:16:40.166958 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 02 20:16:40 crc kubenswrapper[4796]: I1202 20:16:40.205987 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 02 20:16:40 crc kubenswrapper[4796]: I1202 20:16:40.355589 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 02 20:16:40 crc kubenswrapper[4796]: I1202 20:16:40.358454 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 02 20:16:40 crc kubenswrapper[4796]: I1202 20:16:40.378591 4796 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 02 20:16:40 crc kubenswrapper[4796]: I1202 20:16:40.444193 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 02 20:16:40 crc kubenswrapper[4796]: I1202 20:16:40.446504 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 02 20:16:40 crc kubenswrapper[4796]: I1202 20:16:40.506404 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 02 20:16:40 crc kubenswrapper[4796]: I1202 20:16:40.618077 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 02 20:16:40 crc kubenswrapper[4796]: I1202 20:16:40.618863 4796 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-canary"/"canary-serving-cert" Dec 02 20:16:40 crc kubenswrapper[4796]: I1202 20:16:40.649943 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 02 20:16:40 crc kubenswrapper[4796]: I1202 20:16:40.669184 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 02 20:16:40 crc kubenswrapper[4796]: I1202 20:16:40.736583 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 02 20:16:40 crc kubenswrapper[4796]: I1202 20:16:40.794714 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 02 20:16:40 crc kubenswrapper[4796]: I1202 20:16:40.797099 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 02 20:16:40 crc kubenswrapper[4796]: I1202 20:16:40.875652 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 02 20:16:40 crc kubenswrapper[4796]: I1202 20:16:40.912095 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 02 20:16:41 crc kubenswrapper[4796]: I1202 20:16:41.036161 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 02 20:16:41 crc kubenswrapper[4796]: I1202 20:16:41.062004 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 02 20:16:41 crc kubenswrapper[4796]: I1202 20:16:41.074297 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 02 20:16:41 crc kubenswrapper[4796]: I1202 20:16:41.121876 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 02 20:16:41 crc kubenswrapper[4796]: I1202 20:16:41.150192 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 02 20:16:41 crc kubenswrapper[4796]: I1202 20:16:41.232342 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 02 20:16:41 crc kubenswrapper[4796]: I1202 20:16:41.253855 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 02 20:16:41 crc kubenswrapper[4796]: I1202 20:16:41.340234 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 02 20:16:41 crc kubenswrapper[4796]: I1202 20:16:41.358311 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 02 20:16:41 crc kubenswrapper[4796]: I1202 20:16:41.379692 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 02 20:16:41 crc kubenswrapper[4796]: I1202 20:16:41.391393 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 02 20:16:41 crc kubenswrapper[4796]: I1202 20:16:41.478961 4796 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 02 20:16:41 crc kubenswrapper[4796]: I1202 20:16:41.479866 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 02 20:16:41 crc kubenswrapper[4796]: I1202 20:16:41.486392 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 02 20:16:41 crc kubenswrapper[4796]: I1202 20:16:41.654317 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 02 20:16:41 crc kubenswrapper[4796]: I1202 20:16:41.657166 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 02 20:16:41 crc kubenswrapper[4796]: I1202 20:16:41.717123 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 02 20:16:41 crc kubenswrapper[4796]: I1202 20:16:41.796972 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 02 20:16:41 crc kubenswrapper[4796]: I1202 20:16:41.835538 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 02 20:16:41 crc kubenswrapper[4796]: I1202 20:16:41.863987 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 02 20:16:41 crc kubenswrapper[4796]: I1202 20:16:41.929575 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 02 20:16:41 crc kubenswrapper[4796]: I1202 20:16:41.968702 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 02 20:16:41 crc kubenswrapper[4796]: I1202 20:16:41.996173 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 02 20:16:42 crc kubenswrapper[4796]: I1202 20:16:42.006809 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 02 20:16:42 crc kubenswrapper[4796]: I1202 20:16:42.042949 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 02 20:16:42 crc kubenswrapper[4796]: I1202 20:16:42.052694 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 02 20:16:42 crc kubenswrapper[4796]: I1202 20:16:42.148738 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 02 20:16:42 crc kubenswrapper[4796]: I1202 20:16:42.482898 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 02 20:16:42 crc kubenswrapper[4796]: I1202 20:16:42.511529 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 02 20:16:42 crc kubenswrapper[4796]: I1202 20:16:42.671333 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 02 20:16:42 crc kubenswrapper[4796]: I1202 20:16:42.849982 
4796 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 02 20:16:42 crc kubenswrapper[4796]: I1202 20:16:42.858394 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-6tx4j"] Dec 02 20:16:42 crc kubenswrapper[4796]: I1202 20:16:42.858502 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 02 20:16:42 crc kubenswrapper[4796]: I1202 20:16:42.865539 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 20:16:42 crc kubenswrapper[4796]: I1202 20:16:42.883600 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=25.883575929 podStartE2EDuration="25.883575929s" podCreationTimestamp="2025-12-02 20:16:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:16:42.879925237 +0000 UTC m=+285.883300801" watchObservedRunningTime="2025-12-02 20:16:42.883575929 +0000 UTC m=+285.886951473" Dec 02 20:16:43 crc kubenswrapper[4796]: I1202 20:16:43.084935 4796 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 02 20:16:43 crc kubenswrapper[4796]: I1202 20:16:43.099185 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 02 20:16:43 crc kubenswrapper[4796]: I1202 20:16:43.149740 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 02 20:16:43 crc kubenswrapper[4796]: I1202 20:16:43.274735 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d39a70e8-5d53-431a-8413-bbb2041fc8dd" path="/var/lib/kubelet/pods/d39a70e8-5d53-431a-8413-bbb2041fc8dd/volumes" Dec 02 20:16:43 crc kubenswrapper[4796]: I1202 20:16:43.290119 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 02 20:16:43 crc kubenswrapper[4796]: I1202 20:16:43.449635 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 02 20:16:43 crc kubenswrapper[4796]: I1202 20:16:43.473310 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 02 20:16:43 crc kubenswrapper[4796]: I1202 20:16:43.756716 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 02 20:16:43 crc kubenswrapper[4796]: I1202 20:16:43.784992 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 02 20:16:43 crc kubenswrapper[4796]: I1202 20:16:43.911884 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.073763 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.238625 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 02 20:16:44 crc 
kubenswrapper[4796]: I1202 20:16:44.500499 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.509854 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd"] Dec 02 20:16:44 crc kubenswrapper[4796]: E1202 20:16:44.510486 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d39a70e8-5d53-431a-8413-bbb2041fc8dd" containerName="oauth-openshift" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.510515 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d39a70e8-5d53-431a-8413-bbb2041fc8dd" containerName="oauth-openshift" Dec 02 20:16:44 crc kubenswrapper[4796]: E1202 20:16:44.510556 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d" containerName="installer" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.510571 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d" containerName="installer" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.510797 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="d39a70e8-5d53-431a-8413-bbb2041fc8dd" containerName="oauth-openshift" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.510817 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a7746b6-206d-4c38-8f34-ba7b3f9e6e2d" containerName="installer" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.511669 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.515011 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.515223 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.515088 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.515176 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.515202 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.524037 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.524328 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.524460 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd"] Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.524801 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.524989 4796 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.525170 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.527391 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.527822 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.528984 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.542391 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.555978 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.557043 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.574010 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9d36fd57-f770-440c-aadc-7364b0b99665-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-57bcd9fbb-tc6zd\" (UID: \"9d36fd57-f770-440c-aadc-7364b0b99665\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.574227 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9d36fd57-f770-440c-aadc-7364b0b99665-v4-0-config-user-template-login\") pod \"oauth-openshift-57bcd9fbb-tc6zd\" (UID: \"9d36fd57-f770-440c-aadc-7364b0b99665\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.574471 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9d36fd57-f770-440c-aadc-7364b0b99665-v4-0-config-system-cliconfig\") pod \"oauth-openshift-57bcd9fbb-tc6zd\" (UID: \"9d36fd57-f770-440c-aadc-7364b0b99665\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.574660 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9d36fd57-f770-440c-aadc-7364b0b99665-v4-0-config-system-serving-cert\") pod \"oauth-openshift-57bcd9fbb-tc6zd\" (UID: \"9d36fd57-f770-440c-aadc-7364b0b99665\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.574825 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9d36fd57-f770-440c-aadc-7364b0b99665-audit-policies\") pod 
\"oauth-openshift-57bcd9fbb-tc6zd\" (UID: \"9d36fd57-f770-440c-aadc-7364b0b99665\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.574951 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9d36fd57-f770-440c-aadc-7364b0b99665-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-57bcd9fbb-tc6zd\" (UID: \"9d36fd57-f770-440c-aadc-7364b0b99665\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.575095 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d36fd57-f770-440c-aadc-7364b0b99665-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-57bcd9fbb-tc6zd\" (UID: \"9d36fd57-f770-440c-aadc-7364b0b99665\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.575209 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9d36fd57-f770-440c-aadc-7364b0b99665-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-57bcd9fbb-tc6zd\" (UID: \"9d36fd57-f770-440c-aadc-7364b0b99665\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.575338 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9d36fd57-f770-440c-aadc-7364b0b99665-v4-0-config-system-session\") pod \"oauth-openshift-57bcd9fbb-tc6zd\" (UID: \"9d36fd57-f770-440c-aadc-7364b0b99665\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.575460 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9d36fd57-f770-440c-aadc-7364b0b99665-audit-dir\") pod \"oauth-openshift-57bcd9fbb-tc6zd\" (UID: \"9d36fd57-f770-440c-aadc-7364b0b99665\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.575571 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9d36fd57-f770-440c-aadc-7364b0b99665-v4-0-config-system-service-ca\") pod \"oauth-openshift-57bcd9fbb-tc6zd\" (UID: \"9d36fd57-f770-440c-aadc-7364b0b99665\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.575735 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9d36fd57-f770-440c-aadc-7364b0b99665-v4-0-config-user-template-error\") pod \"oauth-openshift-57bcd9fbb-tc6zd\" (UID: \"9d36fd57-f770-440c-aadc-7364b0b99665\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.575926 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9d36fd57-f770-440c-aadc-7364b0b99665-v4-0-config-system-router-certs\") pod \"oauth-openshift-57bcd9fbb-tc6zd\" (UID: \"9d36fd57-f770-440c-aadc-7364b0b99665\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.576053 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtbbn\" (UniqueName: \"kubernetes.io/projected/9d36fd57-f770-440c-aadc-7364b0b99665-kube-api-access-vtbbn\") pod \"oauth-openshift-57bcd9fbb-tc6zd\" (UID: \"9d36fd57-f770-440c-aadc-7364b0b99665\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.677217 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9d36fd57-f770-440c-aadc-7364b0b99665-audit-dir\") pod \"oauth-openshift-57bcd9fbb-tc6zd\" (UID: \"9d36fd57-f770-440c-aadc-7364b0b99665\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.677344 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9d36fd57-f770-440c-aadc-7364b0b99665-v4-0-config-system-service-ca\") pod \"oauth-openshift-57bcd9fbb-tc6zd\" (UID: \"9d36fd57-f770-440c-aadc-7364b0b99665\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.677396 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9d36fd57-f770-440c-aadc-7364b0b99665-v4-0-config-user-template-error\") pod \"oauth-openshift-57bcd9fbb-tc6zd\" (UID: \"9d36fd57-f770-440c-aadc-7364b0b99665\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.677416 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9d36fd57-f770-440c-aadc-7364b0b99665-audit-dir\") pod \"oauth-openshift-57bcd9fbb-tc6zd\" (UID: \"9d36fd57-f770-440c-aadc-7364b0b99665\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.677440 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9d36fd57-f770-440c-aadc-7364b0b99665-v4-0-config-system-router-certs\") pod \"oauth-openshift-57bcd9fbb-tc6zd\" (UID: \"9d36fd57-f770-440c-aadc-7364b0b99665\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.678169 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtbbn\" (UniqueName: \"kubernetes.io/projected/9d36fd57-f770-440c-aadc-7364b0b99665-kube-api-access-vtbbn\") pod \"oauth-openshift-57bcd9fbb-tc6zd\" (UID: \"9d36fd57-f770-440c-aadc-7364b0b99665\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.678445 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/9d36fd57-f770-440c-aadc-7364b0b99665-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-57bcd9fbb-tc6zd\" (UID: \"9d36fd57-f770-440c-aadc-7364b0b99665\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.678663 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9d36fd57-f770-440c-aadc-7364b0b99665-v4-0-config-user-template-login\") pod \"oauth-openshift-57bcd9fbb-tc6zd\" (UID: \"9d36fd57-f770-440c-aadc-7364b0b99665\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.678836 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9d36fd57-f770-440c-aadc-7364b0b99665-v4-0-config-system-cliconfig\") pod \"oauth-openshift-57bcd9fbb-tc6zd\" (UID: \"9d36fd57-f770-440c-aadc-7364b0b99665\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.679556 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9d36fd57-f770-440c-aadc-7364b0b99665-v4-0-config-system-service-ca\") pod \"oauth-openshift-57bcd9fbb-tc6zd\" (UID: \"9d36fd57-f770-440c-aadc-7364b0b99665\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.680119 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9d36fd57-f770-440c-aadc-7364b0b99665-v4-0-config-system-cliconfig\") pod \"oauth-openshift-57bcd9fbb-tc6zd\" (UID: \"9d36fd57-f770-440c-aadc-7364b0b99665\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.680508 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9d36fd57-f770-440c-aadc-7364b0b99665-v4-0-config-system-serving-cert\") pod \"oauth-openshift-57bcd9fbb-tc6zd\" (UID: \"9d36fd57-f770-440c-aadc-7364b0b99665\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.680704 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9d36fd57-f770-440c-aadc-7364b0b99665-audit-policies\") pod \"oauth-openshift-57bcd9fbb-tc6zd\" (UID: \"9d36fd57-f770-440c-aadc-7364b0b99665\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.680840 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9d36fd57-f770-440c-aadc-7364b0b99665-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-57bcd9fbb-tc6zd\" (UID: \"9d36fd57-f770-440c-aadc-7364b0b99665\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.681216 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/9d36fd57-f770-440c-aadc-7364b0b99665-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-57bcd9fbb-tc6zd\" (UID: \"9d36fd57-f770-440c-aadc-7364b0b99665\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.681897 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9d36fd57-f770-440c-aadc-7364b0b99665-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-57bcd9fbb-tc6zd\" (UID: \"9d36fd57-f770-440c-aadc-7364b0b99665\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.682061 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9d36fd57-f770-440c-aadc-7364b0b99665-v4-0-config-system-session\") pod \"oauth-openshift-57bcd9fbb-tc6zd\" (UID: \"9d36fd57-f770-440c-aadc-7364b0b99665\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.681521 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9d36fd57-f770-440c-aadc-7364b0b99665-audit-policies\") pod \"oauth-openshift-57bcd9fbb-tc6zd\" (UID: \"9d36fd57-f770-440c-aadc-7364b0b99665\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.683002 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d36fd57-f770-440c-aadc-7364b0b99665-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-57bcd9fbb-tc6zd\" (UID: \"9d36fd57-f770-440c-aadc-7364b0b99665\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.689003 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9d36fd57-f770-440c-aadc-7364b0b99665-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-57bcd9fbb-tc6zd\" (UID: \"9d36fd57-f770-440c-aadc-7364b0b99665\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.689404 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9d36fd57-f770-440c-aadc-7364b0b99665-v4-0-config-user-template-login\") pod \"oauth-openshift-57bcd9fbb-tc6zd\" (UID: \"9d36fd57-f770-440c-aadc-7364b0b99665\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.689595 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9d36fd57-f770-440c-aadc-7364b0b99665-v4-0-config-system-serving-cert\") pod \"oauth-openshift-57bcd9fbb-tc6zd\" (UID: \"9d36fd57-f770-440c-aadc-7364b0b99665\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.689606 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/9d36fd57-f770-440c-aadc-7364b0b99665-v4-0-config-system-router-certs\") pod \"oauth-openshift-57bcd9fbb-tc6zd\" (UID: \"9d36fd57-f770-440c-aadc-7364b0b99665\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.690463 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9d36fd57-f770-440c-aadc-7364b0b99665-v4-0-config-system-session\") pod \"oauth-openshift-57bcd9fbb-tc6zd\" (UID: \"9d36fd57-f770-440c-aadc-7364b0b99665\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.690468 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9d36fd57-f770-440c-aadc-7364b0b99665-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-57bcd9fbb-tc6zd\" (UID: \"9d36fd57-f770-440c-aadc-7364b0b99665\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.692412 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9d36fd57-f770-440c-aadc-7364b0b99665-v4-0-config-user-template-error\") pod \"oauth-openshift-57bcd9fbb-tc6zd\" (UID: \"9d36fd57-f770-440c-aadc-7364b0b99665\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.692873 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9d36fd57-f770-440c-aadc-7364b0b99665-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-57bcd9fbb-tc6zd\" (UID: \"9d36fd57-f770-440c-aadc-7364b0b99665\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.700411 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtbbn\" (UniqueName: \"kubernetes.io/projected/9d36fd57-f770-440c-aadc-7364b0b99665-kube-api-access-vtbbn\") pod \"oauth-openshift-57bcd9fbb-tc6zd\" (UID: \"9d36fd57-f770-440c-aadc-7364b0b99665\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.849395 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:44 crc kubenswrapper[4796]: I1202 20:16:44.860690 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 02 20:16:45 crc kubenswrapper[4796]: I1202 20:16:45.116214 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd"] Dec 02 20:16:45 crc kubenswrapper[4796]: I1202 20:16:45.200686 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 02 20:16:45 crc kubenswrapper[4796]: I1202 20:16:45.243723 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 02 20:16:45 crc kubenswrapper[4796]: I1202 20:16:45.247376 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" event={"ID":"9d36fd57-f770-440c-aadc-7364b0b99665","Type":"ContainerStarted","Data":"fca35ce1580215e7f3787bf7eda3c30c7c812ea9c54141e7d8c869cbeef64f3e"} Dec 02 20:16:45 crc kubenswrapper[4796]: I1202 20:16:45.510758 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 02 20:16:45 crc kubenswrapper[4796]: I1202 20:16:45.518776 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 02 20:16:45 crc kubenswrapper[4796]: I1202 20:16:45.589081 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 02 20:16:45 crc kubenswrapper[4796]: I1202 20:16:45.966298 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 02 20:16:46 crc kubenswrapper[4796]: I1202 20:16:46.021441 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 02 20:16:46 crc kubenswrapper[4796]: I1202 20:16:46.258057 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" event={"ID":"9d36fd57-f770-440c-aadc-7364b0b99665","Type":"ContainerStarted","Data":"c807d664f777564fa4ef7b926db3ca4b88c5831e0458bef73b1b26a0db20f687"} Dec 02 20:16:46 crc kubenswrapper[4796]: I1202 20:16:46.258478 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:46 crc kubenswrapper[4796]: I1202 20:16:46.269000 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" Dec 02 20:16:46 crc kubenswrapper[4796]: I1202 20:16:46.296044 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-57bcd9fbb-tc6zd" podStartSLOduration=63.296013504 podStartE2EDuration="1m3.296013504s" podCreationTimestamp="2025-12-02 20:15:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:16:46.2886618 +0000 UTC m=+289.292037374" watchObservedRunningTime="2025-12-02 20:16:46.296013504 +0000 UTC m=+289.299389078" Dec 02 20:16:47 crc kubenswrapper[4796]: I1202 20:16:47.141594 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 02 20:16:48 crc kubenswrapper[4796]: I1202 
20:16:48.051142 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 02 20:16:50 crc kubenswrapper[4796]: I1202 20:16:50.875049 4796 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 02 20:16:50 crc kubenswrapper[4796]: I1202 20:16:50.876070 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://a736360610ec7ac94741394b199b9b8ed392405d412cef252442923585c41096" gracePeriod=5 Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.143193 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5wq72"] Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.144091 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5wq72" podUID="22a52028-b443-4287-80e1-dfcffb2ba07e" containerName="registry-server" containerID="cri-o://b1f78422ededbb686ec95ff9c6ac1bced379acc2fb1d2940322f5d73ed37336e" gracePeriod=30 Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.154890 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n27mr"] Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.155313 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n27mr" podUID="5411f34e-b1b7-4e1a-9948-49eb9b59a5d8" containerName="registry-server" containerID="cri-o://a3ae620d611ffb04c0cf54fbae2b42522af461907187e0604771b703214f50d8" gracePeriod=30 Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.159419 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8nh9j"] Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.159801 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-8nh9j" podUID="0a51e701-99f5-423c-a413-464a283751f4" containerName="marketplace-operator" containerID="cri-o://785f840a9f462359a0e84183160b5eb7724828dd93b41968a4543c2815e71dbb" gracePeriod=30 Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.175490 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-26bqf"] Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.175746 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-26bqf" podUID="2f262dee-3028-4aa8-8ab3-8e4777368da0" containerName="registry-server" containerID="cri-o://2e7f188102737030e033828789dac0e145312ee1ce94a37f8f0a5a024f3a5aa8" gracePeriod=30 Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.192075 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cz22v"] Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.192357 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cz22v" podUID="ffd16a4b-7b78-4954-8ea4-317fdfcedb55" containerName="registry-server" containerID="cri-o://38effd0c32d0dfbe93c3272cf4809fd97b96660f180bd12989617040b61e493e" gracePeriod=30 Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.213914 4796 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fgrrl"] Dec 02 20:16:51 crc kubenswrapper[4796]: E1202 20:16:51.214236 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.214271 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.214444 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.214999 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fgrrl" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.218930 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fgrrl"] Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.289439 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/93e137a7-0366-483c-afc7-549b29d6c04d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fgrrl\" (UID: \"93e137a7-0366-483c-afc7-549b29d6c04d\") " pod="openshift-marketplace/marketplace-operator-79b997595-fgrrl" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.289515 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr5rd\" (UniqueName: \"kubernetes.io/projected/93e137a7-0366-483c-afc7-549b29d6c04d-kube-api-access-hr5rd\") pod \"marketplace-operator-79b997595-fgrrl\" (UID: \"93e137a7-0366-483c-afc7-549b29d6c04d\") " pod="openshift-marketplace/marketplace-operator-79b997595-fgrrl" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.289538 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/93e137a7-0366-483c-afc7-549b29d6c04d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fgrrl\" (UID: \"93e137a7-0366-483c-afc7-549b29d6c04d\") " pod="openshift-marketplace/marketplace-operator-79b997595-fgrrl" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.299747 4796 generic.go:334] "Generic (PLEG): container finished" podID="0a51e701-99f5-423c-a413-464a283751f4" containerID="785f840a9f462359a0e84183160b5eb7724828dd93b41968a4543c2815e71dbb" exitCode=0 Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.299816 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8nh9j" event={"ID":"0a51e701-99f5-423c-a413-464a283751f4","Type":"ContainerDied","Data":"785f840a9f462359a0e84183160b5eb7724828dd93b41968a4543c2815e71dbb"} Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.306813 4796 generic.go:334] "Generic (PLEG): container finished" podID="5411f34e-b1b7-4e1a-9948-49eb9b59a5d8" containerID="a3ae620d611ffb04c0cf54fbae2b42522af461907187e0604771b703214f50d8" exitCode=0 Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.306909 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n27mr" 
event={"ID":"5411f34e-b1b7-4e1a-9948-49eb9b59a5d8","Type":"ContainerDied","Data":"a3ae620d611ffb04c0cf54fbae2b42522af461907187e0604771b703214f50d8"} Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.309864 4796 generic.go:334] "Generic (PLEG): container finished" podID="2f262dee-3028-4aa8-8ab3-8e4777368da0" containerID="2e7f188102737030e033828789dac0e145312ee1ce94a37f8f0a5a024f3a5aa8" exitCode=0 Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.309920 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26bqf" event={"ID":"2f262dee-3028-4aa8-8ab3-8e4777368da0","Type":"ContainerDied","Data":"2e7f188102737030e033828789dac0e145312ee1ce94a37f8f0a5a024f3a5aa8"} Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.311911 4796 generic.go:334] "Generic (PLEG): container finished" podID="22a52028-b443-4287-80e1-dfcffb2ba07e" containerID="b1f78422ededbb686ec95ff9c6ac1bced379acc2fb1d2940322f5d73ed37336e" exitCode=0 Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.311943 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wq72" event={"ID":"22a52028-b443-4287-80e1-dfcffb2ba07e","Type":"ContainerDied","Data":"b1f78422ededbb686ec95ff9c6ac1bced379acc2fb1d2940322f5d73ed37336e"} Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.391394 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/93e137a7-0366-483c-afc7-549b29d6c04d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fgrrl\" (UID: \"93e137a7-0366-483c-afc7-549b29d6c04d\") " pod="openshift-marketplace/marketplace-operator-79b997595-fgrrl" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.391820 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr5rd\" (UniqueName: \"kubernetes.io/projected/93e137a7-0366-483c-afc7-549b29d6c04d-kube-api-access-hr5rd\") pod \"marketplace-operator-79b997595-fgrrl\" (UID: \"93e137a7-0366-483c-afc7-549b29d6c04d\") " pod="openshift-marketplace/marketplace-operator-79b997595-fgrrl" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.391841 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/93e137a7-0366-483c-afc7-549b29d6c04d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fgrrl\" (UID: \"93e137a7-0366-483c-afc7-549b29d6c04d\") " pod="openshift-marketplace/marketplace-operator-79b997595-fgrrl" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.394864 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/93e137a7-0366-483c-afc7-549b29d6c04d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fgrrl\" (UID: \"93e137a7-0366-483c-afc7-549b29d6c04d\") " pod="openshift-marketplace/marketplace-operator-79b997595-fgrrl" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.400205 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/93e137a7-0366-483c-afc7-549b29d6c04d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fgrrl\" (UID: \"93e137a7-0366-483c-afc7-549b29d6c04d\") " pod="openshift-marketplace/marketplace-operator-79b997595-fgrrl" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.411068 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr5rd\" (UniqueName: \"kubernetes.io/projected/93e137a7-0366-483c-afc7-549b29d6c04d-kube-api-access-hr5rd\") pod \"marketplace-operator-79b997595-fgrrl\" (UID: \"93e137a7-0366-483c-afc7-549b29d6c04d\") " pod="openshift-marketplace/marketplace-operator-79b997595-fgrrl" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.607162 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fgrrl" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.612729 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5wq72" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.633960 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n27mr" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.636114 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8nh9j" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.639669 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cz22v" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.658909 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-26bqf" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.694589 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzv4k\" (UniqueName: \"kubernetes.io/projected/ffd16a4b-7b78-4954-8ea4-317fdfcedb55-kube-api-access-mzv4k\") pod \"ffd16a4b-7b78-4954-8ea4-317fdfcedb55\" (UID: \"ffd16a4b-7b78-4954-8ea4-317fdfcedb55\") " Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.694658 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffd16a4b-7b78-4954-8ea4-317fdfcedb55-utilities\") pod \"ffd16a4b-7b78-4954-8ea4-317fdfcedb55\" (UID: \"ffd16a4b-7b78-4954-8ea4-317fdfcedb55\") " Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.694684 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0a51e701-99f5-423c-a413-464a283751f4-marketplace-operator-metrics\") pod \"0a51e701-99f5-423c-a413-464a283751f4\" (UID: \"0a51e701-99f5-423c-a413-464a283751f4\") " Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.694709 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5rzg\" (UniqueName: \"kubernetes.io/projected/0a51e701-99f5-423c-a413-464a283751f4-kube-api-access-l5rzg\") pod \"0a51e701-99f5-423c-a413-464a283751f4\" (UID: \"0a51e701-99f5-423c-a413-464a283751f4\") " Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.694741 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5411f34e-b1b7-4e1a-9948-49eb9b59a5d8-utilities\") pod \"5411f34e-b1b7-4e1a-9948-49eb9b59a5d8\" (UID: \"5411f34e-b1b7-4e1a-9948-49eb9b59a5d8\") " Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.694765 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/22a52028-b443-4287-80e1-dfcffb2ba07e-catalog-content\") pod \"22a52028-b443-4287-80e1-dfcffb2ba07e\" (UID: \"22a52028-b443-4287-80e1-dfcffb2ba07e\") " Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.694785 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cc62z\" (UniqueName: \"kubernetes.io/projected/2f262dee-3028-4aa8-8ab3-8e4777368da0-kube-api-access-cc62z\") pod \"2f262dee-3028-4aa8-8ab3-8e4777368da0\" (UID: \"2f262dee-3028-4aa8-8ab3-8e4777368da0\") " Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.694817 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6q4l\" (UniqueName: \"kubernetes.io/projected/22a52028-b443-4287-80e1-dfcffb2ba07e-kube-api-access-w6q4l\") pod \"22a52028-b443-4287-80e1-dfcffb2ba07e\" (UID: \"22a52028-b443-4287-80e1-dfcffb2ba07e\") " Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.694852 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbmx7\" (UniqueName: \"kubernetes.io/projected/5411f34e-b1b7-4e1a-9948-49eb9b59a5d8-kube-api-access-pbmx7\") pod \"5411f34e-b1b7-4e1a-9948-49eb9b59a5d8\" (UID: \"5411f34e-b1b7-4e1a-9948-49eb9b59a5d8\") " Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.694878 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22a52028-b443-4287-80e1-dfcffb2ba07e-utilities\") pod \"22a52028-b443-4287-80e1-dfcffb2ba07e\" (UID: \"22a52028-b443-4287-80e1-dfcffb2ba07e\") " Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.694895 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f262dee-3028-4aa8-8ab3-8e4777368da0-utilities\") pod \"2f262dee-3028-4aa8-8ab3-8e4777368da0\" (UID: \"2f262dee-3028-4aa8-8ab3-8e4777368da0\") " Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.694922 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffd16a4b-7b78-4954-8ea4-317fdfcedb55-catalog-content\") pod \"ffd16a4b-7b78-4954-8ea4-317fdfcedb55\" (UID: \"ffd16a4b-7b78-4954-8ea4-317fdfcedb55\") " Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.694940 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0a51e701-99f5-423c-a413-464a283751f4-marketplace-trusted-ca\") pod \"0a51e701-99f5-423c-a413-464a283751f4\" (UID: \"0a51e701-99f5-423c-a413-464a283751f4\") " Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.694959 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f262dee-3028-4aa8-8ab3-8e4777368da0-catalog-content\") pod \"2f262dee-3028-4aa8-8ab3-8e4777368da0\" (UID: \"2f262dee-3028-4aa8-8ab3-8e4777368da0\") " Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.694987 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5411f34e-b1b7-4e1a-9948-49eb9b59a5d8-catalog-content\") pod \"5411f34e-b1b7-4e1a-9948-49eb9b59a5d8\" (UID: \"5411f34e-b1b7-4e1a-9948-49eb9b59a5d8\") " Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.698160 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/ffd16a4b-7b78-4954-8ea4-317fdfcedb55-utilities" (OuterVolumeSpecName: "utilities") pod "ffd16a4b-7b78-4954-8ea4-317fdfcedb55" (UID: "ffd16a4b-7b78-4954-8ea4-317fdfcedb55"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.700118 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5411f34e-b1b7-4e1a-9948-49eb9b59a5d8-kube-api-access-pbmx7" (OuterVolumeSpecName: "kube-api-access-pbmx7") pod "5411f34e-b1b7-4e1a-9948-49eb9b59a5d8" (UID: "5411f34e-b1b7-4e1a-9948-49eb9b59a5d8"). InnerVolumeSpecName "kube-api-access-pbmx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.701322 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffd16a4b-7b78-4954-8ea4-317fdfcedb55-kube-api-access-mzv4k" (OuterVolumeSpecName: "kube-api-access-mzv4k") pod "ffd16a4b-7b78-4954-8ea4-317fdfcedb55" (UID: "ffd16a4b-7b78-4954-8ea4-317fdfcedb55"). InnerVolumeSpecName "kube-api-access-mzv4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.701364 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a51e701-99f5-423c-a413-464a283751f4-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "0a51e701-99f5-423c-a413-464a283751f4" (UID: "0a51e701-99f5-423c-a413-464a283751f4"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.701760 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a51e701-99f5-423c-a413-464a283751f4-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "0a51e701-99f5-423c-a413-464a283751f4" (UID: "0a51e701-99f5-423c-a413-464a283751f4"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.702354 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f262dee-3028-4aa8-8ab3-8e4777368da0-kube-api-access-cc62z" (OuterVolumeSpecName: "kube-api-access-cc62z") pod "2f262dee-3028-4aa8-8ab3-8e4777368da0" (UID: "2f262dee-3028-4aa8-8ab3-8e4777368da0"). InnerVolumeSpecName "kube-api-access-cc62z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.704479 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f262dee-3028-4aa8-8ab3-8e4777368da0-utilities" (OuterVolumeSpecName: "utilities") pod "2f262dee-3028-4aa8-8ab3-8e4777368da0" (UID: "2f262dee-3028-4aa8-8ab3-8e4777368da0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.706417 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22a52028-b443-4287-80e1-dfcffb2ba07e-kube-api-access-w6q4l" (OuterVolumeSpecName: "kube-api-access-w6q4l") pod "22a52028-b443-4287-80e1-dfcffb2ba07e" (UID: "22a52028-b443-4287-80e1-dfcffb2ba07e"). InnerVolumeSpecName "kube-api-access-w6q4l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.706905 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5411f34e-b1b7-4e1a-9948-49eb9b59a5d8-utilities" (OuterVolumeSpecName: "utilities") pod "5411f34e-b1b7-4e1a-9948-49eb9b59a5d8" (UID: "5411f34e-b1b7-4e1a-9948-49eb9b59a5d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.715990 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22a52028-b443-4287-80e1-dfcffb2ba07e-utilities" (OuterVolumeSpecName: "utilities") pod "22a52028-b443-4287-80e1-dfcffb2ba07e" (UID: "22a52028-b443-4287-80e1-dfcffb2ba07e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.722592 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a51e701-99f5-423c-a413-464a283751f4-kube-api-access-l5rzg" (OuterVolumeSpecName: "kube-api-access-l5rzg") pod "0a51e701-99f5-423c-a413-464a283751f4" (UID: "0a51e701-99f5-423c-a413-464a283751f4"). InnerVolumeSpecName "kube-api-access-l5rzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.736824 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f262dee-3028-4aa8-8ab3-8e4777368da0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f262dee-3028-4aa8-8ab3-8e4777368da0" (UID: "2f262dee-3028-4aa8-8ab3-8e4777368da0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.753809 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22a52028-b443-4287-80e1-dfcffb2ba07e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "22a52028-b443-4287-80e1-dfcffb2ba07e" (UID: "22a52028-b443-4287-80e1-dfcffb2ba07e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.763546 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5411f34e-b1b7-4e1a-9948-49eb9b59a5d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5411f34e-b1b7-4e1a-9948-49eb9b59a5d8" (UID: "5411f34e-b1b7-4e1a-9948-49eb9b59a5d8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.796841 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6q4l\" (UniqueName: \"kubernetes.io/projected/22a52028-b443-4287-80e1-dfcffb2ba07e-kube-api-access-w6q4l\") on node \"crc\" DevicePath \"\"" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.796873 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbmx7\" (UniqueName: \"kubernetes.io/projected/5411f34e-b1b7-4e1a-9948-49eb9b59a5d8-kube-api-access-pbmx7\") on node \"crc\" DevicePath \"\"" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.796884 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22a52028-b443-4287-80e1-dfcffb2ba07e-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.796896 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f262dee-3028-4aa8-8ab3-8e4777368da0-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.796918 4796 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0a51e701-99f5-423c-a413-464a283751f4-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.796927 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f262dee-3028-4aa8-8ab3-8e4777368da0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.796937 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5411f34e-b1b7-4e1a-9948-49eb9b59a5d8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.796945 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzv4k\" (UniqueName: \"kubernetes.io/projected/ffd16a4b-7b78-4954-8ea4-317fdfcedb55-kube-api-access-mzv4k\") on node \"crc\" DevicePath \"\"" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.796953 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffd16a4b-7b78-4954-8ea4-317fdfcedb55-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.796962 4796 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0a51e701-99f5-423c-a413-464a283751f4-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.796970 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5rzg\" (UniqueName: \"kubernetes.io/projected/0a51e701-99f5-423c-a413-464a283751f4-kube-api-access-l5rzg\") on node \"crc\" DevicePath \"\"" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.796978 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5411f34e-b1b7-4e1a-9948-49eb9b59a5d8-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.796985 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/22a52028-b443-4287-80e1-dfcffb2ba07e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.796994 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cc62z\" (UniqueName: \"kubernetes.io/projected/2f262dee-3028-4aa8-8ab3-8e4777368da0-kube-api-access-cc62z\") on node \"crc\" DevicePath \"\"" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.829708 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffd16a4b-7b78-4954-8ea4-317fdfcedb55-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ffd16a4b-7b78-4954-8ea4-317fdfcedb55" (UID: "ffd16a4b-7b78-4954-8ea4-317fdfcedb55"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.841696 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fgrrl"] Dec 02 20:16:51 crc kubenswrapper[4796]: I1202 20:16:51.898898 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffd16a4b-7b78-4954-8ea4-317fdfcedb55-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.319634 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fgrrl" event={"ID":"93e137a7-0366-483c-afc7-549b29d6c04d","Type":"ContainerStarted","Data":"074ded94655fefc41c2e679e68f470991c02cc206b8dae7f76d8487694d95876"} Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.319693 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fgrrl" event={"ID":"93e137a7-0366-483c-afc7-549b29d6c04d","Type":"ContainerStarted","Data":"67d33c1241e95f0a44142e22dd2670ab5cc304676cb3df5779e5076486c8ede0"} Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.321196 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-fgrrl" Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.323087 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8nh9j" event={"ID":"0a51e701-99f5-423c-a413-464a283751f4","Type":"ContainerDied","Data":"9ecafb69e45a9072ec92df2f56ec20d6092c525532d382d4ec59a17442beba8b"} Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.323128 4796 scope.go:117] "RemoveContainer" containerID="785f840a9f462359a0e84183160b5eb7724828dd93b41968a4543c2815e71dbb" Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.323249 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8nh9j" Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.330467 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n27mr" event={"ID":"5411f34e-b1b7-4e1a-9948-49eb9b59a5d8","Type":"ContainerDied","Data":"389224d57d2d585e685f8f18f8ba28df8bffb91e86cd2602d8f11fd763ba4937"} Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.330594 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n27mr" Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.338556 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26bqf" event={"ID":"2f262dee-3028-4aa8-8ab3-8e4777368da0","Type":"ContainerDied","Data":"817f7c22743daffd1aa255b591fe02e8458ebbd949d1d064ee18770207a0483c"} Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.338685 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-26bqf" Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.343003 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-fgrrl" Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.343595 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5wq72" Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.343585 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wq72" event={"ID":"22a52028-b443-4287-80e1-dfcffb2ba07e","Type":"ContainerDied","Data":"378d7b0e2b6a9091601b851c19e7e22f0c9ab36751c60ebdacfb60dd764b112f"} Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.345283 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-fgrrl" podStartSLOduration=1.345270843 podStartE2EDuration="1.345270843s" podCreationTimestamp="2025-12-02 20:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:16:52.345073147 +0000 UTC m=+295.348448691" watchObservedRunningTime="2025-12-02 20:16:52.345270843 +0000 UTC m=+295.348646387" Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.353654 4796 scope.go:117] "RemoveContainer" containerID="a3ae620d611ffb04c0cf54fbae2b42522af461907187e0604771b703214f50d8" Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.354580 4796 generic.go:334] "Generic (PLEG): container finished" podID="ffd16a4b-7b78-4954-8ea4-317fdfcedb55" containerID="38effd0c32d0dfbe93c3272cf4809fd97b96660f180bd12989617040b61e493e" exitCode=0 Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.354623 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cz22v" event={"ID":"ffd16a4b-7b78-4954-8ea4-317fdfcedb55","Type":"ContainerDied","Data":"38effd0c32d0dfbe93c3272cf4809fd97b96660f180bd12989617040b61e493e"} Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.354653 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cz22v" event={"ID":"ffd16a4b-7b78-4954-8ea4-317fdfcedb55","Type":"ContainerDied","Data":"88eaa862d2772a0c57ea1a7c9dfffbf425542285db78868efc729a8d83acfe77"} Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.354674 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cz22v" Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.377509 4796 scope.go:117] "RemoveContainer" containerID="72cc768e36e1a448eba20aba7f9a78a8b8cdf3a06f68b4a51fbac99925bc398b" Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.421943 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8nh9j"] Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.431313 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8nh9j"] Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.436327 4796 scope.go:117] "RemoveContainer" containerID="7047fa7fe67cacae8f4b81242971975c50db66f0092caa1dbc3283ff5e9568af" Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.451286 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n27mr"] Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.454345 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n27mr"] Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.457264 4796 scope.go:117] "RemoveContainer" containerID="2e7f188102737030e033828789dac0e145312ee1ce94a37f8f0a5a024f3a5aa8" Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.464086 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-26bqf"] Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.468388 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-26bqf"] Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.476479 4796 scope.go:117] "RemoveContainer" containerID="23d0491c8e75b23af7ee85a96bfcf43451beb7008c5c341a565038f4a9a94d9d" Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.476901 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cz22v"] Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.484410 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cz22v"] Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.497014 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5wq72"] Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.497669 4796 scope.go:117] "RemoveContainer" containerID="812c0e3acf6e8602f29a8ca752b9d153590ee7f229cc1a2b8c51e02ead8e23fa" Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.500111 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5wq72"] Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.512509 4796 scope.go:117] "RemoveContainer" containerID="b1f78422ededbb686ec95ff9c6ac1bced379acc2fb1d2940322f5d73ed37336e" Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.527467 4796 scope.go:117] "RemoveContainer" containerID="570304f3bf069b1960dd9a1a828a98886201f668c89ec92e15a68025cf60d46e" Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.545772 4796 scope.go:117] "RemoveContainer" containerID="679acf89474aafaeb00440a2a756196a3257dd2b5b9a3efc2373d136e30b80a3" Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.567552 4796 scope.go:117] "RemoveContainer" containerID="38effd0c32d0dfbe93c3272cf4809fd97b96660f180bd12989617040b61e493e" Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.578666 4796 scope.go:117] "RemoveContainer" 
containerID="fba7b6612b2008bc045d6351ffd5d22378832f240c59b0b01ea8998deb7011b5" Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.591362 4796 scope.go:117] "RemoveContainer" containerID="2befcad334750966addc3cf37b9bb6cef720f033b405ed4ab930dccd01b42a95" Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.605150 4796 scope.go:117] "RemoveContainer" containerID="38effd0c32d0dfbe93c3272cf4809fd97b96660f180bd12989617040b61e493e" Dec 02 20:16:52 crc kubenswrapper[4796]: E1202 20:16:52.605782 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38effd0c32d0dfbe93c3272cf4809fd97b96660f180bd12989617040b61e493e\": container with ID starting with 38effd0c32d0dfbe93c3272cf4809fd97b96660f180bd12989617040b61e493e not found: ID does not exist" containerID="38effd0c32d0dfbe93c3272cf4809fd97b96660f180bd12989617040b61e493e" Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.605824 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38effd0c32d0dfbe93c3272cf4809fd97b96660f180bd12989617040b61e493e"} err="failed to get container status \"38effd0c32d0dfbe93c3272cf4809fd97b96660f180bd12989617040b61e493e\": rpc error: code = NotFound desc = could not find container \"38effd0c32d0dfbe93c3272cf4809fd97b96660f180bd12989617040b61e493e\": container with ID starting with 38effd0c32d0dfbe93c3272cf4809fd97b96660f180bd12989617040b61e493e not found: ID does not exist" Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.605850 4796 scope.go:117] "RemoveContainer" containerID="fba7b6612b2008bc045d6351ffd5d22378832f240c59b0b01ea8998deb7011b5" Dec 02 20:16:52 crc kubenswrapper[4796]: E1202 20:16:52.606231 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fba7b6612b2008bc045d6351ffd5d22378832f240c59b0b01ea8998deb7011b5\": container with ID starting with fba7b6612b2008bc045d6351ffd5d22378832f240c59b0b01ea8998deb7011b5 not found: ID does not exist" containerID="fba7b6612b2008bc045d6351ffd5d22378832f240c59b0b01ea8998deb7011b5" Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.606268 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fba7b6612b2008bc045d6351ffd5d22378832f240c59b0b01ea8998deb7011b5"} err="failed to get container status \"fba7b6612b2008bc045d6351ffd5d22378832f240c59b0b01ea8998deb7011b5\": rpc error: code = NotFound desc = could not find container \"fba7b6612b2008bc045d6351ffd5d22378832f240c59b0b01ea8998deb7011b5\": container with ID starting with fba7b6612b2008bc045d6351ffd5d22378832f240c59b0b01ea8998deb7011b5 not found: ID does not exist" Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.606280 4796 scope.go:117] "RemoveContainer" containerID="2befcad334750966addc3cf37b9bb6cef720f033b405ed4ab930dccd01b42a95" Dec 02 20:16:52 crc kubenswrapper[4796]: E1202 20:16:52.606564 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2befcad334750966addc3cf37b9bb6cef720f033b405ed4ab930dccd01b42a95\": container with ID starting with 2befcad334750966addc3cf37b9bb6cef720f033b405ed4ab930dccd01b42a95 not found: ID does not exist" containerID="2befcad334750966addc3cf37b9bb6cef720f033b405ed4ab930dccd01b42a95" Dec 02 20:16:52 crc kubenswrapper[4796]: I1202 20:16:52.606604 4796 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2befcad334750966addc3cf37b9bb6cef720f033b405ed4ab930dccd01b42a95"} err="failed to get container status \"2befcad334750966addc3cf37b9bb6cef720f033b405ed4ab930dccd01b42a95\": rpc error: code = NotFound desc = could not find container \"2befcad334750966addc3cf37b9bb6cef720f033b405ed4ab930dccd01b42a95\": container with ID starting with 2befcad334750966addc3cf37b9bb6cef720f033b405ed4ab930dccd01b42a95 not found: ID does not exist" Dec 02 20:16:53 crc kubenswrapper[4796]: I1202 20:16:53.273341 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a51e701-99f5-423c-a413-464a283751f4" path="/var/lib/kubelet/pods/0a51e701-99f5-423c-a413-464a283751f4/volumes" Dec 02 20:16:53 crc kubenswrapper[4796]: I1202 20:16:53.274568 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22a52028-b443-4287-80e1-dfcffb2ba07e" path="/var/lib/kubelet/pods/22a52028-b443-4287-80e1-dfcffb2ba07e/volumes" Dec 02 20:16:53 crc kubenswrapper[4796]: I1202 20:16:53.278316 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f262dee-3028-4aa8-8ab3-8e4777368da0" path="/var/lib/kubelet/pods/2f262dee-3028-4aa8-8ab3-8e4777368da0/volumes" Dec 02 20:16:53 crc kubenswrapper[4796]: I1202 20:16:53.279360 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5411f34e-b1b7-4e1a-9948-49eb9b59a5d8" path="/var/lib/kubelet/pods/5411f34e-b1b7-4e1a-9948-49eb9b59a5d8/volumes" Dec 02 20:16:53 crc kubenswrapper[4796]: I1202 20:16:53.282444 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffd16a4b-7b78-4954-8ea4-317fdfcedb55" path="/var/lib/kubelet/pods/ffd16a4b-7b78-4954-8ea4-317fdfcedb55/volumes" Dec 02 20:16:56 crc kubenswrapper[4796]: I1202 20:16:56.405383 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 02 20:16:56 crc kubenswrapper[4796]: I1202 20:16:56.405961 4796 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="a736360610ec7ac94741394b199b9b8ed392405d412cef252442923585c41096" exitCode=137 Dec 02 20:16:56 crc kubenswrapper[4796]: I1202 20:16:56.471468 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 02 20:16:56 crc kubenswrapper[4796]: I1202 20:16:56.471556 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 20:16:56 crc kubenswrapper[4796]: I1202 20:16:56.563321 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 02 20:16:56 crc kubenswrapper[4796]: I1202 20:16:56.563434 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 02 20:16:56 crc kubenswrapper[4796]: I1202 20:16:56.563505 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:16:56 crc kubenswrapper[4796]: I1202 20:16:56.563539 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 02 20:16:56 crc kubenswrapper[4796]: I1202 20:16:56.563568 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 02 20:16:56 crc kubenswrapper[4796]: I1202 20:16:56.563632 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 02 20:16:56 crc kubenswrapper[4796]: I1202 20:16:56.563557 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:16:56 crc kubenswrapper[4796]: I1202 20:16:56.563590 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:16:56 crc kubenswrapper[4796]: I1202 20:16:56.563705 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:16:56 crc kubenswrapper[4796]: I1202 20:16:56.563949 4796 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 02 20:16:56 crc kubenswrapper[4796]: I1202 20:16:56.563972 4796 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 02 20:16:56 crc kubenswrapper[4796]: I1202 20:16:56.563992 4796 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 02 20:16:56 crc kubenswrapper[4796]: I1202 20:16:56.564008 4796 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 02 20:16:56 crc kubenswrapper[4796]: I1202 20:16:56.572161 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:16:56 crc kubenswrapper[4796]: I1202 20:16:56.665940 4796 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 02 20:16:57 crc kubenswrapper[4796]: I1202 20:16:57.274444 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 02 20:16:57 crc kubenswrapper[4796]: I1202 20:16:57.414091 4796 scope.go:117] "RemoveContainer" containerID="a736360610ec7ac94741394b199b9b8ed392405d412cef252442923585c41096" Dec 02 20:16:57 crc kubenswrapper[4796]: I1202 20:16:57.415643 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 20:16:57 crc kubenswrapper[4796]: E1202 20:16:57.419134 4796 kuberuntime_gc.go:389] "Failed to remove container log dead symlink" err="remove /var/log/containers/kube-apiserver-startup-monitor-crc_openshift-kube-apiserver_startup-monitor-a736360610ec7ac94741394b199b9b8ed392405d412cef252442923585c41096.log: no such file or directory" path="/var/log/containers/kube-apiserver-startup-monitor-crc_openshift-kube-apiserver_startup-monitor-a736360610ec7ac94741394b199b9b8ed392405d412cef252442923585c41096.log" Dec 02 20:17:00 crc kubenswrapper[4796]: E1202 20:17:00.562274 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-bbfb44b7f4602e45549559a42a8d3748ed8972b7bd0f560e6bdb30c1558e9198\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice\": RecentStats: unable to find data in memory cache]" Dec 02 20:17:10 crc kubenswrapper[4796]: E1202 20:17:10.707625 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-bbfb44b7f4602e45549559a42a8d3748ed8972b7bd0f560e6bdb30c1558e9198\": RecentStats: unable to find data in memory cache]" Dec 02 20:17:13 crc kubenswrapper[4796]: I1202 20:17:13.209770 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zjrkr"] Dec 02 20:17:13 crc kubenswrapper[4796]: I1202 20:17:13.210434 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-zjrkr" podUID="a3c8abe1-552e-404c-be1f-88f30e467d8f" containerName="controller-manager" containerID="cri-o://9aeb65988071ca8766b793519ea21349d480a71b9630327503905cda58366127" gracePeriod=30 Dec 02 20:17:13 crc kubenswrapper[4796]: I1202 20:17:13.315597 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lsdcg"] Dec 02 20:17:13 crc kubenswrapper[4796]: I1202 20:17:13.315861 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lsdcg" podUID="bbd52c61-04b7-424a-88c0-71653fd8d65e" containerName="route-controller-manager" containerID="cri-o://d6919fb9bb8a86dce7c2d99588e036e2cf5312f25b24ae8535d5a67bbe422f4f" gracePeriod=30 Dec 02 20:17:13 crc kubenswrapper[4796]: I1202 20:17:13.512501 4796 generic.go:334] "Generic (PLEG): container finished" podID="bbd52c61-04b7-424a-88c0-71653fd8d65e" containerID="d6919fb9bb8a86dce7c2d99588e036e2cf5312f25b24ae8535d5a67bbe422f4f" exitCode=0 Dec 02 20:17:13 crc kubenswrapper[4796]: I1202 20:17:13.512973 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lsdcg" event={"ID":"bbd52c61-04b7-424a-88c0-71653fd8d65e","Type":"ContainerDied","Data":"d6919fb9bb8a86dce7c2d99588e036e2cf5312f25b24ae8535d5a67bbe422f4f"} Dec 02 20:17:13 crc 
kubenswrapper[4796]: I1202 20:17:13.526513 4796 generic.go:334] "Generic (PLEG): container finished" podID="a3c8abe1-552e-404c-be1f-88f30e467d8f" containerID="9aeb65988071ca8766b793519ea21349d480a71b9630327503905cda58366127" exitCode=0 Dec 02 20:17:13 crc kubenswrapper[4796]: I1202 20:17:13.526562 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-zjrkr" event={"ID":"a3c8abe1-552e-404c-be1f-88f30e467d8f","Type":"ContainerDied","Data":"9aeb65988071ca8766b793519ea21349d480a71b9630327503905cda58366127"} Dec 02 20:17:13 crc kubenswrapper[4796]: I1202 20:17:13.639939 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-zjrkr" Dec 02 20:17:13 crc kubenswrapper[4796]: I1202 20:17:13.701856 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3c8abe1-552e-404c-be1f-88f30e467d8f-client-ca\") pod \"a3c8abe1-552e-404c-be1f-88f30e467d8f\" (UID: \"a3c8abe1-552e-404c-be1f-88f30e467d8f\") " Dec 02 20:17:13 crc kubenswrapper[4796]: I1202 20:17:13.701924 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3c8abe1-552e-404c-be1f-88f30e467d8f-serving-cert\") pod \"a3c8abe1-552e-404c-be1f-88f30e467d8f\" (UID: \"a3c8abe1-552e-404c-be1f-88f30e467d8f\") " Dec 02 20:17:13 crc kubenswrapper[4796]: I1202 20:17:13.701947 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jndnc\" (UniqueName: \"kubernetes.io/projected/a3c8abe1-552e-404c-be1f-88f30e467d8f-kube-api-access-jndnc\") pod \"a3c8abe1-552e-404c-be1f-88f30e467d8f\" (UID: \"a3c8abe1-552e-404c-be1f-88f30e467d8f\") " Dec 02 20:17:13 crc kubenswrapper[4796]: I1202 20:17:13.702003 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a3c8abe1-552e-404c-be1f-88f30e467d8f-proxy-ca-bundles\") pod \"a3c8abe1-552e-404c-be1f-88f30e467d8f\" (UID: \"a3c8abe1-552e-404c-be1f-88f30e467d8f\") " Dec 02 20:17:13 crc kubenswrapper[4796]: I1202 20:17:13.702041 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3c8abe1-552e-404c-be1f-88f30e467d8f-config\") pod \"a3c8abe1-552e-404c-be1f-88f30e467d8f\" (UID: \"a3c8abe1-552e-404c-be1f-88f30e467d8f\") " Dec 02 20:17:13 crc kubenswrapper[4796]: I1202 20:17:13.702737 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3c8abe1-552e-404c-be1f-88f30e467d8f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a3c8abe1-552e-404c-be1f-88f30e467d8f" (UID: "a3c8abe1-552e-404c-be1f-88f30e467d8f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:17:13 crc kubenswrapper[4796]: I1202 20:17:13.702810 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3c8abe1-552e-404c-be1f-88f30e467d8f-config" (OuterVolumeSpecName: "config") pod "a3c8abe1-552e-404c-be1f-88f30e467d8f" (UID: "a3c8abe1-552e-404c-be1f-88f30e467d8f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:17:13 crc kubenswrapper[4796]: I1202 20:17:13.702997 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3c8abe1-552e-404c-be1f-88f30e467d8f-client-ca" (OuterVolumeSpecName: "client-ca") pod "a3c8abe1-552e-404c-be1f-88f30e467d8f" (UID: "a3c8abe1-552e-404c-be1f-88f30e467d8f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:17:13 crc kubenswrapper[4796]: I1202 20:17:13.708675 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3c8abe1-552e-404c-be1f-88f30e467d8f-kube-api-access-jndnc" (OuterVolumeSpecName: "kube-api-access-jndnc") pod "a3c8abe1-552e-404c-be1f-88f30e467d8f" (UID: "a3c8abe1-552e-404c-be1f-88f30e467d8f"). InnerVolumeSpecName "kube-api-access-jndnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:17:13 crc kubenswrapper[4796]: I1202 20:17:13.709236 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3c8abe1-552e-404c-be1f-88f30e467d8f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a3c8abe1-552e-404c-be1f-88f30e467d8f" (UID: "a3c8abe1-552e-404c-be1f-88f30e467d8f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:17:13 crc kubenswrapper[4796]: I1202 20:17:13.734074 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lsdcg" Dec 02 20:17:13 crc kubenswrapper[4796]: I1202 20:17:13.803394 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbd52c61-04b7-424a-88c0-71653fd8d65e-serving-cert\") pod \"bbd52c61-04b7-424a-88c0-71653fd8d65e\" (UID: \"bbd52c61-04b7-424a-88c0-71653fd8d65e\") " Dec 02 20:17:13 crc kubenswrapper[4796]: I1202 20:17:13.803470 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbd52c61-04b7-424a-88c0-71653fd8d65e-config\") pod \"bbd52c61-04b7-424a-88c0-71653fd8d65e\" (UID: \"bbd52c61-04b7-424a-88c0-71653fd8d65e\") " Dec 02 20:17:13 crc kubenswrapper[4796]: I1202 20:17:13.803582 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5wbr\" (UniqueName: \"kubernetes.io/projected/bbd52c61-04b7-424a-88c0-71653fd8d65e-kube-api-access-x5wbr\") pod \"bbd52c61-04b7-424a-88c0-71653fd8d65e\" (UID: \"bbd52c61-04b7-424a-88c0-71653fd8d65e\") " Dec 02 20:17:13 crc kubenswrapper[4796]: I1202 20:17:13.803607 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bbd52c61-04b7-424a-88c0-71653fd8d65e-client-ca\") pod \"bbd52c61-04b7-424a-88c0-71653fd8d65e\" (UID: \"bbd52c61-04b7-424a-88c0-71653fd8d65e\") " Dec 02 20:17:13 crc kubenswrapper[4796]: I1202 20:17:13.804036 4796 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a3c8abe1-552e-404c-be1f-88f30e467d8f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 02 20:17:13 crc kubenswrapper[4796]: I1202 20:17:13.804062 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3c8abe1-552e-404c-be1f-88f30e467d8f-config\") on node \"crc\" DevicePath \"\"" Dec 02 20:17:13 crc 
kubenswrapper[4796]: I1202 20:17:13.804074 4796 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3c8abe1-552e-404c-be1f-88f30e467d8f-client-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:17:13 crc kubenswrapper[4796]: I1202 20:17:13.804084 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3c8abe1-552e-404c-be1f-88f30e467d8f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 20:17:13 crc kubenswrapper[4796]: I1202 20:17:13.804094 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jndnc\" (UniqueName: \"kubernetes.io/projected/a3c8abe1-552e-404c-be1f-88f30e467d8f-kube-api-access-jndnc\") on node \"crc\" DevicePath \"\"" Dec 02 20:17:13 crc kubenswrapper[4796]: I1202 20:17:13.804715 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbd52c61-04b7-424a-88c0-71653fd8d65e-config" (OuterVolumeSpecName: "config") pod "bbd52c61-04b7-424a-88c0-71653fd8d65e" (UID: "bbd52c61-04b7-424a-88c0-71653fd8d65e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:17:13 crc kubenswrapper[4796]: I1202 20:17:13.805171 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbd52c61-04b7-424a-88c0-71653fd8d65e-client-ca" (OuterVolumeSpecName: "client-ca") pod "bbd52c61-04b7-424a-88c0-71653fd8d65e" (UID: "bbd52c61-04b7-424a-88c0-71653fd8d65e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:17:13 crc kubenswrapper[4796]: I1202 20:17:13.810027 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbd52c61-04b7-424a-88c0-71653fd8d65e-kube-api-access-x5wbr" (OuterVolumeSpecName: "kube-api-access-x5wbr") pod "bbd52c61-04b7-424a-88c0-71653fd8d65e" (UID: "bbd52c61-04b7-424a-88c0-71653fd8d65e"). InnerVolumeSpecName "kube-api-access-x5wbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:17:13 crc kubenswrapper[4796]: I1202 20:17:13.810265 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbd52c61-04b7-424a-88c0-71653fd8d65e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bbd52c61-04b7-424a-88c0-71653fd8d65e" (UID: "bbd52c61-04b7-424a-88c0-71653fd8d65e"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:17:13 crc kubenswrapper[4796]: I1202 20:17:13.905419 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5wbr\" (UniqueName: \"kubernetes.io/projected/bbd52c61-04b7-424a-88c0-71653fd8d65e-kube-api-access-x5wbr\") on node \"crc\" DevicePath \"\"" Dec 02 20:17:13 crc kubenswrapper[4796]: I1202 20:17:13.905456 4796 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bbd52c61-04b7-424a-88c0-71653fd8d65e-client-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:17:13 crc kubenswrapper[4796]: I1202 20:17:13.905467 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbd52c61-04b7-424a-88c0-71653fd8d65e-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 20:17:13 crc kubenswrapper[4796]: I1202 20:17:13.905476 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbd52c61-04b7-424a-88c0-71653fd8d65e-config\") on node \"crc\" DevicePath \"\"" Dec 02 20:17:14 crc kubenswrapper[4796]: I1202 20:17:14.535222 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lsdcg" event={"ID":"bbd52c61-04b7-424a-88c0-71653fd8d65e","Type":"ContainerDied","Data":"8bdac55ad65b8982500776fb4946ecb291ceaccc904212f0824a6b5c7984283a"} Dec 02 20:17:14 crc kubenswrapper[4796]: I1202 20:17:14.535365 4796 scope.go:117] "RemoveContainer" containerID="d6919fb9bb8a86dce7c2d99588e036e2cf5312f25b24ae8535d5a67bbe422f4f" Dec 02 20:17:14 crc kubenswrapper[4796]: I1202 20:17:14.535975 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lsdcg" Dec 02 20:17:14 crc kubenswrapper[4796]: I1202 20:17:14.539156 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-zjrkr" event={"ID":"a3c8abe1-552e-404c-be1f-88f30e467d8f","Type":"ContainerDied","Data":"84d4767c47f2a8dd2d1d3144151f28e31aebd8745f6f06ce5185aa96652ff5df"} Dec 02 20:17:14 crc kubenswrapper[4796]: I1202 20:17:14.539211 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-zjrkr" Dec 02 20:17:14 crc kubenswrapper[4796]: I1202 20:17:14.567110 4796 scope.go:117] "RemoveContainer" containerID="9aeb65988071ca8766b793519ea21349d480a71b9630327503905cda58366127" Dec 02 20:17:14 crc kubenswrapper[4796]: I1202 20:17:14.595242 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zjrkr"] Dec 02 20:17:14 crc kubenswrapper[4796]: I1202 20:17:14.601991 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zjrkr"] Dec 02 20:17:14 crc kubenswrapper[4796]: I1202 20:17:14.610600 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lsdcg"] Dec 02 20:17:14 crc kubenswrapper[4796]: I1202 20:17:14.620923 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lsdcg"] Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.274672 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3c8abe1-552e-404c-be1f-88f30e467d8f" path="/var/lib/kubelet/pods/a3c8abe1-552e-404c-be1f-88f30e467d8f/volumes" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.276141 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbd52c61-04b7-424a-88c0-71653fd8d65e" path="/var/lib/kubelet/pods/bbd52c61-04b7-424a-88c0-71653fd8d65e/volumes" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.425860 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-54467446f7-svld8"] Dec 02 20:17:15 crc kubenswrapper[4796]: E1202 20:17:15.426502 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22a52028-b443-4287-80e1-dfcffb2ba07e" containerName="registry-server" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.426622 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="22a52028-b443-4287-80e1-dfcffb2ba07e" containerName="registry-server" Dec 02 20:17:15 crc kubenswrapper[4796]: E1202 20:17:15.426735 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f262dee-3028-4aa8-8ab3-8e4777368da0" containerName="extract-content" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.426846 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f262dee-3028-4aa8-8ab3-8e4777368da0" containerName="extract-content" Dec 02 20:17:15 crc kubenswrapper[4796]: E1202 20:17:15.426960 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22a52028-b443-4287-80e1-dfcffb2ba07e" containerName="extract-utilities" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.427062 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="22a52028-b443-4287-80e1-dfcffb2ba07e" containerName="extract-utilities" Dec 02 20:17:15 crc kubenswrapper[4796]: E1202 20:17:15.427193 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f262dee-3028-4aa8-8ab3-8e4777368da0" containerName="registry-server" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.427319 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f262dee-3028-4aa8-8ab3-8e4777368da0" containerName="registry-server" Dec 02 20:17:15 crc kubenswrapper[4796]: E1202 20:17:15.427447 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5411f34e-b1b7-4e1a-9948-49eb9b59a5d8" containerName="extract-content" Dec 02 
20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.427550 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="5411f34e-b1b7-4e1a-9948-49eb9b59a5d8" containerName="extract-content" Dec 02 20:17:15 crc kubenswrapper[4796]: E1202 20:17:15.427672 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffd16a4b-7b78-4954-8ea4-317fdfcedb55" containerName="registry-server" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.427787 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffd16a4b-7b78-4954-8ea4-317fdfcedb55" containerName="registry-server" Dec 02 20:17:15 crc kubenswrapper[4796]: E1202 20:17:15.427896 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f262dee-3028-4aa8-8ab3-8e4777368da0" containerName="extract-utilities" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.428006 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f262dee-3028-4aa8-8ab3-8e4777368da0" containerName="extract-utilities" Dec 02 20:17:15 crc kubenswrapper[4796]: E1202 20:17:15.428107 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a51e701-99f5-423c-a413-464a283751f4" containerName="marketplace-operator" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.428209 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a51e701-99f5-423c-a413-464a283751f4" containerName="marketplace-operator" Dec 02 20:17:15 crc kubenswrapper[4796]: E1202 20:17:15.428338 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5411f34e-b1b7-4e1a-9948-49eb9b59a5d8" containerName="extract-utilities" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.428447 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="5411f34e-b1b7-4e1a-9948-49eb9b59a5d8" containerName="extract-utilities" Dec 02 20:17:15 crc kubenswrapper[4796]: E1202 20:17:15.428567 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22a52028-b443-4287-80e1-dfcffb2ba07e" containerName="extract-content" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.428670 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="22a52028-b443-4287-80e1-dfcffb2ba07e" containerName="extract-content" Dec 02 20:17:15 crc kubenswrapper[4796]: E1202 20:17:15.428766 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5411f34e-b1b7-4e1a-9948-49eb9b59a5d8" containerName="registry-server" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.428860 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="5411f34e-b1b7-4e1a-9948-49eb9b59a5d8" containerName="registry-server" Dec 02 20:17:15 crc kubenswrapper[4796]: E1202 20:17:15.428947 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbd52c61-04b7-424a-88c0-71653fd8d65e" containerName="route-controller-manager" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.429053 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbd52c61-04b7-424a-88c0-71653fd8d65e" containerName="route-controller-manager" Dec 02 20:17:15 crc kubenswrapper[4796]: E1202 20:17:15.429144 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3c8abe1-552e-404c-be1f-88f30e467d8f" containerName="controller-manager" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.429221 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3c8abe1-552e-404c-be1f-88f30e467d8f" containerName="controller-manager" Dec 02 20:17:15 crc kubenswrapper[4796]: E1202 20:17:15.429348 4796 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ffd16a4b-7b78-4954-8ea4-317fdfcedb55" containerName="extract-content" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.429428 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffd16a4b-7b78-4954-8ea4-317fdfcedb55" containerName="extract-content" Dec 02 20:17:15 crc kubenswrapper[4796]: E1202 20:17:15.429504 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffd16a4b-7b78-4954-8ea4-317fdfcedb55" containerName="extract-utilities" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.429585 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffd16a4b-7b78-4954-8ea4-317fdfcedb55" containerName="extract-utilities" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.429804 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbd52c61-04b7-424a-88c0-71653fd8d65e" containerName="route-controller-manager" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.429910 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a51e701-99f5-423c-a413-464a283751f4" containerName="marketplace-operator" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.429992 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="22a52028-b443-4287-80e1-dfcffb2ba07e" containerName="registry-server" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.430074 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3c8abe1-552e-404c-be1f-88f30e467d8f" containerName="controller-manager" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.430154 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f262dee-3028-4aa8-8ab3-8e4777368da0" containerName="registry-server" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.430244 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffd16a4b-7b78-4954-8ea4-317fdfcedb55" containerName="registry-server" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.430359 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="5411f34e-b1b7-4e1a-9948-49eb9b59a5d8" containerName="registry-server" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.430910 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54467446f7-svld8" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.434601 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.436304 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.436564 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f8c564845-277lt"] Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.437360 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.437447 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.437754 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f8c564845-277lt" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.439478 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.439914 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.441091 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.441841 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.443051 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.443514 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.443941 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.444337 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.451888 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f8c564845-277lt"] Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.452195 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.514986 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-54467446f7-svld8"] Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.550755 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2flb\" (UniqueName: \"kubernetes.io/projected/3e8ac767-532a-4c91-92ac-b2e145ece897-kube-api-access-n2flb\") pod \"controller-manager-54467446f7-svld8\" (UID: \"3e8ac767-532a-4c91-92ac-b2e145ece897\") " pod="openshift-controller-manager/controller-manager-54467446f7-svld8" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.550824 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/187a6cf6-849a-42be-a4a3-b248bd6fe654-client-ca\") pod \"route-controller-manager-f8c564845-277lt\" (UID: \"187a6cf6-849a-42be-a4a3-b248bd6fe654\") " pod="openshift-route-controller-manager/route-controller-manager-f8c564845-277lt" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.550918 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvc59\" (UniqueName: \"kubernetes.io/projected/187a6cf6-849a-42be-a4a3-b248bd6fe654-kube-api-access-nvc59\") pod \"route-controller-manager-f8c564845-277lt\" (UID: \"187a6cf6-849a-42be-a4a3-b248bd6fe654\") " 
pod="openshift-route-controller-manager/route-controller-manager-f8c564845-277lt" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.551001 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e8ac767-532a-4c91-92ac-b2e145ece897-serving-cert\") pod \"controller-manager-54467446f7-svld8\" (UID: \"3e8ac767-532a-4c91-92ac-b2e145ece897\") " pod="openshift-controller-manager/controller-manager-54467446f7-svld8" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.551119 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/187a6cf6-849a-42be-a4a3-b248bd6fe654-config\") pod \"route-controller-manager-f8c564845-277lt\" (UID: \"187a6cf6-849a-42be-a4a3-b248bd6fe654\") " pod="openshift-route-controller-manager/route-controller-manager-f8c564845-277lt" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.551179 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3e8ac767-532a-4c91-92ac-b2e145ece897-proxy-ca-bundles\") pod \"controller-manager-54467446f7-svld8\" (UID: \"3e8ac767-532a-4c91-92ac-b2e145ece897\") " pod="openshift-controller-manager/controller-manager-54467446f7-svld8" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.551290 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/187a6cf6-849a-42be-a4a3-b248bd6fe654-serving-cert\") pod \"route-controller-manager-f8c564845-277lt\" (UID: \"187a6cf6-849a-42be-a4a3-b248bd6fe654\") " pod="openshift-route-controller-manager/route-controller-manager-f8c564845-277lt" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.551388 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e8ac767-532a-4c91-92ac-b2e145ece897-client-ca\") pod \"controller-manager-54467446f7-svld8\" (UID: \"3e8ac767-532a-4c91-92ac-b2e145ece897\") " pod="openshift-controller-manager/controller-manager-54467446f7-svld8" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.551453 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e8ac767-532a-4c91-92ac-b2e145ece897-config\") pod \"controller-manager-54467446f7-svld8\" (UID: \"3e8ac767-532a-4c91-92ac-b2e145ece897\") " pod="openshift-controller-manager/controller-manager-54467446f7-svld8" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.652902 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2flb\" (UniqueName: \"kubernetes.io/projected/3e8ac767-532a-4c91-92ac-b2e145ece897-kube-api-access-n2flb\") pod \"controller-manager-54467446f7-svld8\" (UID: \"3e8ac767-532a-4c91-92ac-b2e145ece897\") " pod="openshift-controller-manager/controller-manager-54467446f7-svld8" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.652973 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/187a6cf6-849a-42be-a4a3-b248bd6fe654-client-ca\") pod \"route-controller-manager-f8c564845-277lt\" (UID: \"187a6cf6-849a-42be-a4a3-b248bd6fe654\") " 
pod="openshift-route-controller-manager/route-controller-manager-f8c564845-277lt" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.653040 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvc59\" (UniqueName: \"kubernetes.io/projected/187a6cf6-849a-42be-a4a3-b248bd6fe654-kube-api-access-nvc59\") pod \"route-controller-manager-f8c564845-277lt\" (UID: \"187a6cf6-849a-42be-a4a3-b248bd6fe654\") " pod="openshift-route-controller-manager/route-controller-manager-f8c564845-277lt" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.653111 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e8ac767-532a-4c91-92ac-b2e145ece897-serving-cert\") pod \"controller-manager-54467446f7-svld8\" (UID: \"3e8ac767-532a-4c91-92ac-b2e145ece897\") " pod="openshift-controller-manager/controller-manager-54467446f7-svld8" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.653186 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/187a6cf6-849a-42be-a4a3-b248bd6fe654-config\") pod \"route-controller-manager-f8c564845-277lt\" (UID: \"187a6cf6-849a-42be-a4a3-b248bd6fe654\") " pod="openshift-route-controller-manager/route-controller-manager-f8c564845-277lt" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.653237 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3e8ac767-532a-4c91-92ac-b2e145ece897-proxy-ca-bundles\") pod \"controller-manager-54467446f7-svld8\" (UID: \"3e8ac767-532a-4c91-92ac-b2e145ece897\") " pod="openshift-controller-manager/controller-manager-54467446f7-svld8" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.653333 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/187a6cf6-849a-42be-a4a3-b248bd6fe654-serving-cert\") pod \"route-controller-manager-f8c564845-277lt\" (UID: \"187a6cf6-849a-42be-a4a3-b248bd6fe654\") " pod="openshift-route-controller-manager/route-controller-manager-f8c564845-277lt" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.653411 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e8ac767-532a-4c91-92ac-b2e145ece897-client-ca\") pod \"controller-manager-54467446f7-svld8\" (UID: \"3e8ac767-532a-4c91-92ac-b2e145ece897\") " pod="openshift-controller-manager/controller-manager-54467446f7-svld8" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.653476 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e8ac767-532a-4c91-92ac-b2e145ece897-config\") pod \"controller-manager-54467446f7-svld8\" (UID: \"3e8ac767-532a-4c91-92ac-b2e145ece897\") " pod="openshift-controller-manager/controller-manager-54467446f7-svld8" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.655500 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e8ac767-532a-4c91-92ac-b2e145ece897-client-ca\") pod \"controller-manager-54467446f7-svld8\" (UID: \"3e8ac767-532a-4c91-92ac-b2e145ece897\") " pod="openshift-controller-manager/controller-manager-54467446f7-svld8" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.655514 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3e8ac767-532a-4c91-92ac-b2e145ece897-proxy-ca-bundles\") pod \"controller-manager-54467446f7-svld8\" (UID: \"3e8ac767-532a-4c91-92ac-b2e145ece897\") " pod="openshift-controller-manager/controller-manager-54467446f7-svld8" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.656325 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/187a6cf6-849a-42be-a4a3-b248bd6fe654-config\") pod \"route-controller-manager-f8c564845-277lt\" (UID: \"187a6cf6-849a-42be-a4a3-b248bd6fe654\") " pod="openshift-route-controller-manager/route-controller-manager-f8c564845-277lt" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.656338 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e8ac767-532a-4c91-92ac-b2e145ece897-config\") pod \"controller-manager-54467446f7-svld8\" (UID: \"3e8ac767-532a-4c91-92ac-b2e145ece897\") " pod="openshift-controller-manager/controller-manager-54467446f7-svld8" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.656805 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/187a6cf6-849a-42be-a4a3-b248bd6fe654-client-ca\") pod \"route-controller-manager-f8c564845-277lt\" (UID: \"187a6cf6-849a-42be-a4a3-b248bd6fe654\") " pod="openshift-route-controller-manager/route-controller-manager-f8c564845-277lt" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.659998 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/187a6cf6-849a-42be-a4a3-b248bd6fe654-serving-cert\") pod \"route-controller-manager-f8c564845-277lt\" (UID: \"187a6cf6-849a-42be-a4a3-b248bd6fe654\") " pod="openshift-route-controller-manager/route-controller-manager-f8c564845-277lt" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.662181 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e8ac767-532a-4c91-92ac-b2e145ece897-serving-cert\") pod \"controller-manager-54467446f7-svld8\" (UID: \"3e8ac767-532a-4c91-92ac-b2e145ece897\") " pod="openshift-controller-manager/controller-manager-54467446f7-svld8" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.688429 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvc59\" (UniqueName: \"kubernetes.io/projected/187a6cf6-849a-42be-a4a3-b248bd6fe654-kube-api-access-nvc59\") pod \"route-controller-manager-f8c564845-277lt\" (UID: \"187a6cf6-849a-42be-a4a3-b248bd6fe654\") " pod="openshift-route-controller-manager/route-controller-manager-f8c564845-277lt" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.693316 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2flb\" (UniqueName: \"kubernetes.io/projected/3e8ac767-532a-4c91-92ac-b2e145ece897-kube-api-access-n2flb\") pod \"controller-manager-54467446f7-svld8\" (UID: \"3e8ac767-532a-4c91-92ac-b2e145ece897\") " pod="openshift-controller-manager/controller-manager-54467446f7-svld8" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.771036 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-54467446f7-svld8" Dec 02 20:17:15 crc kubenswrapper[4796]: I1202 20:17:15.782173 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f8c564845-277lt" Dec 02 20:17:16 crc kubenswrapper[4796]: I1202 20:17:16.027949 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f8c564845-277lt"] Dec 02 20:17:16 crc kubenswrapper[4796]: W1202 20:17:16.040091 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod187a6cf6_849a_42be_a4a3_b248bd6fe654.slice/crio-ae67f8fd145ccdc448d2c021a51637c10b6fa86080e99543f620efdc2bc0c180 WatchSource:0}: Error finding container ae67f8fd145ccdc448d2c021a51637c10b6fa86080e99543f620efdc2bc0c180: Status 404 returned error can't find the container with id ae67f8fd145ccdc448d2c021a51637c10b6fa86080e99543f620efdc2bc0c180 Dec 02 20:17:16 crc kubenswrapper[4796]: I1202 20:17:16.068920 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-54467446f7-svld8"] Dec 02 20:17:16 crc kubenswrapper[4796]: W1202 20:17:16.075063 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e8ac767_532a_4c91_92ac_b2e145ece897.slice/crio-6cd131b0d1c581cb9123e282ed05342ba5362ed7e988ec045376879ef2ac5b25 WatchSource:0}: Error finding container 6cd131b0d1c581cb9123e282ed05342ba5362ed7e988ec045376879ef2ac5b25: Status 404 returned error can't find the container with id 6cd131b0d1c581cb9123e282ed05342ba5362ed7e988ec045376879ef2ac5b25 Dec 02 20:17:16 crc kubenswrapper[4796]: I1202 20:17:16.556705 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f8c564845-277lt" event={"ID":"187a6cf6-849a-42be-a4a3-b248bd6fe654","Type":"ContainerStarted","Data":"17a937b0a8e315eeec694137d40385e7e7942510322b0a3b384dac05ba653588"} Dec 02 20:17:16 crc kubenswrapper[4796]: I1202 20:17:16.556748 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f8c564845-277lt" event={"ID":"187a6cf6-849a-42be-a4a3-b248bd6fe654","Type":"ContainerStarted","Data":"ae67f8fd145ccdc448d2c021a51637c10b6fa86080e99543f620efdc2bc0c180"} Dec 02 20:17:16 crc kubenswrapper[4796]: I1202 20:17:16.557087 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-f8c564845-277lt" Dec 02 20:17:16 crc kubenswrapper[4796]: I1202 20:17:16.558643 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54467446f7-svld8" event={"ID":"3e8ac767-532a-4c91-92ac-b2e145ece897","Type":"ContainerStarted","Data":"2090cd06ab9d6fc920d3508077c47e8be272ed51a0c41c047b1bd00dd3c4222d"} Dec 02 20:17:16 crc kubenswrapper[4796]: I1202 20:17:16.558694 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54467446f7-svld8" event={"ID":"3e8ac767-532a-4c91-92ac-b2e145ece897","Type":"ContainerStarted","Data":"6cd131b0d1c581cb9123e282ed05342ba5362ed7e988ec045376879ef2ac5b25"} Dec 02 20:17:16 crc kubenswrapper[4796]: I1202 20:17:16.559161 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-54467446f7-svld8" Dec 02 20:17:16 crc kubenswrapper[4796]: I1202 20:17:16.564332 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-54467446f7-svld8" Dec 02 20:17:16 crc kubenswrapper[4796]: I1202 20:17:16.570718 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-f8c564845-277lt" Dec 02 20:17:16 crc kubenswrapper[4796]: I1202 20:17:16.586359 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-f8c564845-277lt" podStartSLOduration=3.58633686 podStartE2EDuration="3.58633686s" podCreationTimestamp="2025-12-02 20:17:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:17:16.582398961 +0000 UTC m=+319.585774505" watchObservedRunningTime="2025-12-02 20:17:16.58633686 +0000 UTC m=+319.589712404" Dec 02 20:17:16 crc kubenswrapper[4796]: I1202 20:17:16.636695 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-54467446f7-svld8" podStartSLOduration=3.636676677 podStartE2EDuration="3.636676677s" podCreationTimestamp="2025-12-02 20:17:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:17:16.603187031 +0000 UTC m=+319.606562575" watchObservedRunningTime="2025-12-02 20:17:16.636676677 +0000 UTC m=+319.640052201" Dec 02 20:17:20 crc kubenswrapper[4796]: E1202 20:17:20.856942 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-bbfb44b7f4602e45549559a42a8d3748ed8972b7bd0f560e6bdb30c1558e9198\": RecentStats: unable to find data in memory cache]" Dec 02 20:17:22 crc kubenswrapper[4796]: I1202 20:17:22.600122 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p75xc"] Dec 02 20:17:22 crc kubenswrapper[4796]: I1202 20:17:22.602525 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p75xc" Dec 02 20:17:22 crc kubenswrapper[4796]: I1202 20:17:22.605964 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 02 20:17:22 crc kubenswrapper[4796]: I1202 20:17:22.627657 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p75xc"] Dec 02 20:17:22 crc kubenswrapper[4796]: I1202 20:17:22.658889 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvn7c\" (UniqueName: \"kubernetes.io/projected/f4b761ec-a0de-4a6b-bfcc-85d9de6fd4cd-kube-api-access-kvn7c\") pod \"redhat-operators-p75xc\" (UID: \"f4b761ec-a0de-4a6b-bfcc-85d9de6fd4cd\") " pod="openshift-marketplace/redhat-operators-p75xc" Dec 02 20:17:22 crc kubenswrapper[4796]: I1202 20:17:22.658975 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4b761ec-a0de-4a6b-bfcc-85d9de6fd4cd-catalog-content\") pod \"redhat-operators-p75xc\" (UID: \"f4b761ec-a0de-4a6b-bfcc-85d9de6fd4cd\") " pod="openshift-marketplace/redhat-operators-p75xc" Dec 02 20:17:22 crc kubenswrapper[4796]: I1202 20:17:22.659022 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4b761ec-a0de-4a6b-bfcc-85d9de6fd4cd-utilities\") pod \"redhat-operators-p75xc\" (UID: \"f4b761ec-a0de-4a6b-bfcc-85d9de6fd4cd\") " pod="openshift-marketplace/redhat-operators-p75xc" Dec 02 20:17:22 crc kubenswrapper[4796]: I1202 20:17:22.761435 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvn7c\" (UniqueName: \"kubernetes.io/projected/f4b761ec-a0de-4a6b-bfcc-85d9de6fd4cd-kube-api-access-kvn7c\") pod \"redhat-operators-p75xc\" (UID: \"f4b761ec-a0de-4a6b-bfcc-85d9de6fd4cd\") " pod="openshift-marketplace/redhat-operators-p75xc" Dec 02 20:17:22 crc kubenswrapper[4796]: I1202 20:17:22.761622 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4b761ec-a0de-4a6b-bfcc-85d9de6fd4cd-catalog-content\") pod \"redhat-operators-p75xc\" (UID: \"f4b761ec-a0de-4a6b-bfcc-85d9de6fd4cd\") " pod="openshift-marketplace/redhat-operators-p75xc" Dec 02 20:17:22 crc kubenswrapper[4796]: I1202 20:17:22.761834 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4b761ec-a0de-4a6b-bfcc-85d9de6fd4cd-utilities\") pod \"redhat-operators-p75xc\" (UID: \"f4b761ec-a0de-4a6b-bfcc-85d9de6fd4cd\") " pod="openshift-marketplace/redhat-operators-p75xc" Dec 02 20:17:22 crc kubenswrapper[4796]: I1202 20:17:22.762626 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4b761ec-a0de-4a6b-bfcc-85d9de6fd4cd-catalog-content\") pod \"redhat-operators-p75xc\" (UID: \"f4b761ec-a0de-4a6b-bfcc-85d9de6fd4cd\") " pod="openshift-marketplace/redhat-operators-p75xc" Dec 02 20:17:22 crc kubenswrapper[4796]: I1202 20:17:22.763767 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4b761ec-a0de-4a6b-bfcc-85d9de6fd4cd-utilities\") pod \"redhat-operators-p75xc\" (UID: \"f4b761ec-a0de-4a6b-bfcc-85d9de6fd4cd\") " 
pod="openshift-marketplace/redhat-operators-p75xc" Dec 02 20:17:22 crc kubenswrapper[4796]: I1202 20:17:22.786123 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvn7c\" (UniqueName: \"kubernetes.io/projected/f4b761ec-a0de-4a6b-bfcc-85d9de6fd4cd-kube-api-access-kvn7c\") pod \"redhat-operators-p75xc\" (UID: \"f4b761ec-a0de-4a6b-bfcc-85d9de6fd4cd\") " pod="openshift-marketplace/redhat-operators-p75xc" Dec 02 20:17:22 crc kubenswrapper[4796]: I1202 20:17:22.924379 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p75xc" Dec 02 20:17:23 crc kubenswrapper[4796]: I1202 20:17:23.193162 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p5k9r"] Dec 02 20:17:23 crc kubenswrapper[4796]: I1202 20:17:23.195387 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p5k9r" Dec 02 20:17:23 crc kubenswrapper[4796]: I1202 20:17:23.198578 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 02 20:17:23 crc kubenswrapper[4796]: I1202 20:17:23.202429 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p5k9r"] Dec 02 20:17:23 crc kubenswrapper[4796]: I1202 20:17:23.275960 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knxjq\" (UniqueName: \"kubernetes.io/projected/7cb034af-9ad4-463d-8017-9df8e62d0a24-kube-api-access-knxjq\") pod \"redhat-marketplace-p5k9r\" (UID: \"7cb034af-9ad4-463d-8017-9df8e62d0a24\") " pod="openshift-marketplace/redhat-marketplace-p5k9r" Dec 02 20:17:23 crc kubenswrapper[4796]: I1202 20:17:23.276030 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cb034af-9ad4-463d-8017-9df8e62d0a24-utilities\") pod \"redhat-marketplace-p5k9r\" (UID: \"7cb034af-9ad4-463d-8017-9df8e62d0a24\") " pod="openshift-marketplace/redhat-marketplace-p5k9r" Dec 02 20:17:23 crc kubenswrapper[4796]: I1202 20:17:23.276155 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cb034af-9ad4-463d-8017-9df8e62d0a24-catalog-content\") pod \"redhat-marketplace-p5k9r\" (UID: \"7cb034af-9ad4-463d-8017-9df8e62d0a24\") " pod="openshift-marketplace/redhat-marketplace-p5k9r" Dec 02 20:17:23 crc kubenswrapper[4796]: I1202 20:17:23.378114 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knxjq\" (UniqueName: \"kubernetes.io/projected/7cb034af-9ad4-463d-8017-9df8e62d0a24-kube-api-access-knxjq\") pod \"redhat-marketplace-p5k9r\" (UID: \"7cb034af-9ad4-463d-8017-9df8e62d0a24\") " pod="openshift-marketplace/redhat-marketplace-p5k9r" Dec 02 20:17:23 crc kubenswrapper[4796]: I1202 20:17:23.378244 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cb034af-9ad4-463d-8017-9df8e62d0a24-catalog-content\") pod \"redhat-marketplace-p5k9r\" (UID: \"7cb034af-9ad4-463d-8017-9df8e62d0a24\") " pod="openshift-marketplace/redhat-marketplace-p5k9r" Dec 02 20:17:23 crc kubenswrapper[4796]: I1202 20:17:23.378341 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cb034af-9ad4-463d-8017-9df8e62d0a24-utilities\") pod \"redhat-marketplace-p5k9r\" (UID: \"7cb034af-9ad4-463d-8017-9df8e62d0a24\") " pod="openshift-marketplace/redhat-marketplace-p5k9r" Dec 02 20:17:23 crc kubenswrapper[4796]: I1202 20:17:23.379501 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cb034af-9ad4-463d-8017-9df8e62d0a24-utilities\") pod \"redhat-marketplace-p5k9r\" (UID: \"7cb034af-9ad4-463d-8017-9df8e62d0a24\") " pod="openshift-marketplace/redhat-marketplace-p5k9r" Dec 02 20:17:23 crc kubenswrapper[4796]: I1202 20:17:23.379787 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cb034af-9ad4-463d-8017-9df8e62d0a24-catalog-content\") pod \"redhat-marketplace-p5k9r\" (UID: \"7cb034af-9ad4-463d-8017-9df8e62d0a24\") " pod="openshift-marketplace/redhat-marketplace-p5k9r" Dec 02 20:17:23 crc kubenswrapper[4796]: I1202 20:17:23.413390 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knxjq\" (UniqueName: \"kubernetes.io/projected/7cb034af-9ad4-463d-8017-9df8e62d0a24-kube-api-access-knxjq\") pod \"redhat-marketplace-p5k9r\" (UID: \"7cb034af-9ad4-463d-8017-9df8e62d0a24\") " pod="openshift-marketplace/redhat-marketplace-p5k9r" Dec 02 20:17:23 crc kubenswrapper[4796]: I1202 20:17:23.420103 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p75xc"] Dec 02 20:17:23 crc kubenswrapper[4796]: I1202 20:17:23.512463 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p5k9r" Dec 02 20:17:23 crc kubenswrapper[4796]: I1202 20:17:23.610105 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p75xc" event={"ID":"f4b761ec-a0de-4a6b-bfcc-85d9de6fd4cd","Type":"ContainerStarted","Data":"1f5c9a2a5f6288abcfc9c38a3f7a15f78222243ecd00bd015766e8d5313dc0d6"} Dec 02 20:17:23 crc kubenswrapper[4796]: I1202 20:17:23.988839 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p5k9r"] Dec 02 20:17:23 crc kubenswrapper[4796]: W1202 20:17:23.998443 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cb034af_9ad4_463d_8017_9df8e62d0a24.slice/crio-139376a92d90cd1f9f395434ccc297bed3f7ba384d7e15fe409d96cfcdfeec2f WatchSource:0}: Error finding container 139376a92d90cd1f9f395434ccc297bed3f7ba384d7e15fe409d96cfcdfeec2f: Status 404 returned error can't find the container with id 139376a92d90cd1f9f395434ccc297bed3f7ba384d7e15fe409d96cfcdfeec2f Dec 02 20:17:24 crc kubenswrapper[4796]: I1202 20:17:24.619188 4796 generic.go:334] "Generic (PLEG): container finished" podID="f4b761ec-a0de-4a6b-bfcc-85d9de6fd4cd" containerID="ba10847a252163c0d3f47efccc4b77e1195130f47b3147f5e65215a2acff3cb0" exitCode=0 Dec 02 20:17:24 crc kubenswrapper[4796]: I1202 20:17:24.619325 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p75xc" event={"ID":"f4b761ec-a0de-4a6b-bfcc-85d9de6fd4cd","Type":"ContainerDied","Data":"ba10847a252163c0d3f47efccc4b77e1195130f47b3147f5e65215a2acff3cb0"} Dec 02 20:17:24 crc kubenswrapper[4796]: I1202 20:17:24.623594 4796 generic.go:334] "Generic (PLEG): container finished" podID="7cb034af-9ad4-463d-8017-9df8e62d0a24" 
containerID="9e1a6e2d0526a9982f81cc713d0f43a426d6cd388de674550336be4930e80052" exitCode=0 Dec 02 20:17:24 crc kubenswrapper[4796]: I1202 20:17:24.623645 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p5k9r" event={"ID":"7cb034af-9ad4-463d-8017-9df8e62d0a24","Type":"ContainerDied","Data":"9e1a6e2d0526a9982f81cc713d0f43a426d6cd388de674550336be4930e80052"} Dec 02 20:17:24 crc kubenswrapper[4796]: I1202 20:17:24.623691 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p5k9r" event={"ID":"7cb034af-9ad4-463d-8017-9df8e62d0a24","Type":"ContainerStarted","Data":"139376a92d90cd1f9f395434ccc297bed3f7ba384d7e15fe409d96cfcdfeec2f"} Dec 02 20:17:24 crc kubenswrapper[4796]: I1202 20:17:24.980150 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tlm4b"] Dec 02 20:17:24 crc kubenswrapper[4796]: I1202 20:17:24.981692 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tlm4b" Dec 02 20:17:24 crc kubenswrapper[4796]: I1202 20:17:24.984409 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 02 20:17:24 crc kubenswrapper[4796]: I1202 20:17:24.993907 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tlm4b"] Dec 02 20:17:25 crc kubenswrapper[4796]: I1202 20:17:25.123759 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f6ec00d-4ac5-4442-a915-26d7109a0eac-catalog-content\") pod \"community-operators-tlm4b\" (UID: \"6f6ec00d-4ac5-4442-a915-26d7109a0eac\") " pod="openshift-marketplace/community-operators-tlm4b" Dec 02 20:17:25 crc kubenswrapper[4796]: I1202 20:17:25.123840 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f6ec00d-4ac5-4442-a915-26d7109a0eac-utilities\") pod \"community-operators-tlm4b\" (UID: \"6f6ec00d-4ac5-4442-a915-26d7109a0eac\") " pod="openshift-marketplace/community-operators-tlm4b" Dec 02 20:17:25 crc kubenswrapper[4796]: I1202 20:17:25.124026 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htdbb\" (UniqueName: \"kubernetes.io/projected/6f6ec00d-4ac5-4442-a915-26d7109a0eac-kube-api-access-htdbb\") pod \"community-operators-tlm4b\" (UID: \"6f6ec00d-4ac5-4442-a915-26d7109a0eac\") " pod="openshift-marketplace/community-operators-tlm4b" Dec 02 20:17:25 crc kubenswrapper[4796]: I1202 20:17:25.226081 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htdbb\" (UniqueName: \"kubernetes.io/projected/6f6ec00d-4ac5-4442-a915-26d7109a0eac-kube-api-access-htdbb\") pod \"community-operators-tlm4b\" (UID: \"6f6ec00d-4ac5-4442-a915-26d7109a0eac\") " pod="openshift-marketplace/community-operators-tlm4b" Dec 02 20:17:25 crc kubenswrapper[4796]: I1202 20:17:25.226192 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f6ec00d-4ac5-4442-a915-26d7109a0eac-catalog-content\") pod \"community-operators-tlm4b\" (UID: \"6f6ec00d-4ac5-4442-a915-26d7109a0eac\") " pod="openshift-marketplace/community-operators-tlm4b" Dec 02 20:17:25 crc 
kubenswrapper[4796]: I1202 20:17:25.226243 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f6ec00d-4ac5-4442-a915-26d7109a0eac-utilities\") pod \"community-operators-tlm4b\" (UID: \"6f6ec00d-4ac5-4442-a915-26d7109a0eac\") " pod="openshift-marketplace/community-operators-tlm4b" Dec 02 20:17:25 crc kubenswrapper[4796]: I1202 20:17:25.227212 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f6ec00d-4ac5-4442-a915-26d7109a0eac-catalog-content\") pod \"community-operators-tlm4b\" (UID: \"6f6ec00d-4ac5-4442-a915-26d7109a0eac\") " pod="openshift-marketplace/community-operators-tlm4b" Dec 02 20:17:25 crc kubenswrapper[4796]: I1202 20:17:25.229476 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f6ec00d-4ac5-4442-a915-26d7109a0eac-utilities\") pod \"community-operators-tlm4b\" (UID: \"6f6ec00d-4ac5-4442-a915-26d7109a0eac\") " pod="openshift-marketplace/community-operators-tlm4b" Dec 02 20:17:25 crc kubenswrapper[4796]: I1202 20:17:25.251952 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htdbb\" (UniqueName: \"kubernetes.io/projected/6f6ec00d-4ac5-4442-a915-26d7109a0eac-kube-api-access-htdbb\") pod \"community-operators-tlm4b\" (UID: \"6f6ec00d-4ac5-4442-a915-26d7109a0eac\") " pod="openshift-marketplace/community-operators-tlm4b" Dec 02 20:17:25 crc kubenswrapper[4796]: I1202 20:17:25.304927 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tlm4b" Dec 02 20:17:25 crc kubenswrapper[4796]: I1202 20:17:25.577361 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-62m7r"] Dec 02 20:17:25 crc kubenswrapper[4796]: I1202 20:17:25.579381 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-62m7r" Dec 02 20:17:25 crc kubenswrapper[4796]: I1202 20:17:25.582746 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 02 20:17:25 crc kubenswrapper[4796]: I1202 20:17:25.587101 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-62m7r"] Dec 02 20:17:25 crc kubenswrapper[4796]: I1202 20:17:25.631661 4796 generic.go:334] "Generic (PLEG): container finished" podID="7cb034af-9ad4-463d-8017-9df8e62d0a24" containerID="c51d60bd23978f4028a0a58e2e06f243acc4900914acc29f29c2b742a6e5425e" exitCode=0 Dec 02 20:17:25 crc kubenswrapper[4796]: I1202 20:17:25.632941 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p5k9r" event={"ID":"7cb034af-9ad4-463d-8017-9df8e62d0a24","Type":"ContainerDied","Data":"c51d60bd23978f4028a0a58e2e06f243acc4900914acc29f29c2b742a6e5425e"} Dec 02 20:17:25 crc kubenswrapper[4796]: I1202 20:17:25.739446 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmrr5\" (UniqueName: \"kubernetes.io/projected/888e8184-64ca-4f8a-8a7e-ccf06695e6ec-kube-api-access-wmrr5\") pod \"certified-operators-62m7r\" (UID: \"888e8184-64ca-4f8a-8a7e-ccf06695e6ec\") " pod="openshift-marketplace/certified-operators-62m7r" Dec 02 20:17:25 crc kubenswrapper[4796]: I1202 20:17:25.739594 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/888e8184-64ca-4f8a-8a7e-ccf06695e6ec-catalog-content\") pod \"certified-operators-62m7r\" (UID: \"888e8184-64ca-4f8a-8a7e-ccf06695e6ec\") " pod="openshift-marketplace/certified-operators-62m7r" Dec 02 20:17:25 crc kubenswrapper[4796]: I1202 20:17:25.739670 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/888e8184-64ca-4f8a-8a7e-ccf06695e6ec-utilities\") pod \"certified-operators-62m7r\" (UID: \"888e8184-64ca-4f8a-8a7e-ccf06695e6ec\") " pod="openshift-marketplace/certified-operators-62m7r" Dec 02 20:17:25 crc kubenswrapper[4796]: I1202 20:17:25.765531 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tlm4b"] Dec 02 20:17:25 crc kubenswrapper[4796]: I1202 20:17:25.841313 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/888e8184-64ca-4f8a-8a7e-ccf06695e6ec-catalog-content\") pod \"certified-operators-62m7r\" (UID: \"888e8184-64ca-4f8a-8a7e-ccf06695e6ec\") " pod="openshift-marketplace/certified-operators-62m7r" Dec 02 20:17:25 crc kubenswrapper[4796]: I1202 20:17:25.841392 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/888e8184-64ca-4f8a-8a7e-ccf06695e6ec-utilities\") pod \"certified-operators-62m7r\" (UID: \"888e8184-64ca-4f8a-8a7e-ccf06695e6ec\") " pod="openshift-marketplace/certified-operators-62m7r" Dec 02 20:17:25 crc kubenswrapper[4796]: I1202 20:17:25.841435 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmrr5\" (UniqueName: \"kubernetes.io/projected/888e8184-64ca-4f8a-8a7e-ccf06695e6ec-kube-api-access-wmrr5\") pod \"certified-operators-62m7r\" (UID: 
\"888e8184-64ca-4f8a-8a7e-ccf06695e6ec\") " pod="openshift-marketplace/certified-operators-62m7r" Dec 02 20:17:25 crc kubenswrapper[4796]: I1202 20:17:25.842266 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/888e8184-64ca-4f8a-8a7e-ccf06695e6ec-catalog-content\") pod \"certified-operators-62m7r\" (UID: \"888e8184-64ca-4f8a-8a7e-ccf06695e6ec\") " pod="openshift-marketplace/certified-operators-62m7r" Dec 02 20:17:25 crc kubenswrapper[4796]: I1202 20:17:25.842324 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/888e8184-64ca-4f8a-8a7e-ccf06695e6ec-utilities\") pod \"certified-operators-62m7r\" (UID: \"888e8184-64ca-4f8a-8a7e-ccf06695e6ec\") " pod="openshift-marketplace/certified-operators-62m7r" Dec 02 20:17:25 crc kubenswrapper[4796]: I1202 20:17:25.876124 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmrr5\" (UniqueName: \"kubernetes.io/projected/888e8184-64ca-4f8a-8a7e-ccf06695e6ec-kube-api-access-wmrr5\") pod \"certified-operators-62m7r\" (UID: \"888e8184-64ca-4f8a-8a7e-ccf06695e6ec\") " pod="openshift-marketplace/certified-operators-62m7r" Dec 02 20:17:25 crc kubenswrapper[4796]: I1202 20:17:25.992674 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-62m7r" Dec 02 20:17:26 crc kubenswrapper[4796]: I1202 20:17:26.292220 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-62m7r"] Dec 02 20:17:26 crc kubenswrapper[4796]: I1202 20:17:26.644596 4796 generic.go:334] "Generic (PLEG): container finished" podID="888e8184-64ca-4f8a-8a7e-ccf06695e6ec" containerID="fa73804812502dfc9cef879dbb4bcb558ebced9a79b679c6a524e0ebe3f78c2c" exitCode=0 Dec 02 20:17:26 crc kubenswrapper[4796]: I1202 20:17:26.644711 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62m7r" event={"ID":"888e8184-64ca-4f8a-8a7e-ccf06695e6ec","Type":"ContainerDied","Data":"fa73804812502dfc9cef879dbb4bcb558ebced9a79b679c6a524e0ebe3f78c2c"} Dec 02 20:17:26 crc kubenswrapper[4796]: I1202 20:17:26.645106 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62m7r" event={"ID":"888e8184-64ca-4f8a-8a7e-ccf06695e6ec","Type":"ContainerStarted","Data":"e412fc0c82e44d7e4dde92ae7be3e564576857b3d483dfbc11df02105da39214"} Dec 02 20:17:26 crc kubenswrapper[4796]: I1202 20:17:26.649947 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p5k9r" event={"ID":"7cb034af-9ad4-463d-8017-9df8e62d0a24","Type":"ContainerStarted","Data":"294e9b37946b05f3e1f07cce90bd65f76029a721be8f332b98a5793c5fd33011"} Dec 02 20:17:26 crc kubenswrapper[4796]: I1202 20:17:26.651387 4796 generic.go:334] "Generic (PLEG): container finished" podID="6f6ec00d-4ac5-4442-a915-26d7109a0eac" containerID="a06c36baf46d790fa71b1d8919088519f598adcda113cbc29fdaffb6c9ade924" exitCode=0 Dec 02 20:17:26 crc kubenswrapper[4796]: I1202 20:17:26.651504 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlm4b" event={"ID":"6f6ec00d-4ac5-4442-a915-26d7109a0eac","Type":"ContainerDied","Data":"a06c36baf46d790fa71b1d8919088519f598adcda113cbc29fdaffb6c9ade924"} Dec 02 20:17:26 crc kubenswrapper[4796]: I1202 20:17:26.651553 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-tlm4b" event={"ID":"6f6ec00d-4ac5-4442-a915-26d7109a0eac","Type":"ContainerStarted","Data":"574a436b916f0ce890288b866096557798c6a6aaeed76ae23ec881225cbcfcc6"} Dec 02 20:17:26 crc kubenswrapper[4796]: I1202 20:17:26.654985 4796 generic.go:334] "Generic (PLEG): container finished" podID="f4b761ec-a0de-4a6b-bfcc-85d9de6fd4cd" containerID="c1de1da3c25d389336bec65d2ae2ebc58a93ac6d87da2aaedaee0a00259a727e" exitCode=0 Dec 02 20:17:26 crc kubenswrapper[4796]: I1202 20:17:26.655029 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p75xc" event={"ID":"f4b761ec-a0de-4a6b-bfcc-85d9de6fd4cd","Type":"ContainerDied","Data":"c1de1da3c25d389336bec65d2ae2ebc58a93ac6d87da2aaedaee0a00259a727e"} Dec 02 20:17:26 crc kubenswrapper[4796]: I1202 20:17:26.699421 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p5k9r" podStartSLOduration=2.171044761 podStartE2EDuration="3.69939245s" podCreationTimestamp="2025-12-02 20:17:23 +0000 UTC" firstStartedPulling="2025-12-02 20:17:24.625302357 +0000 UTC m=+327.628677911" lastFinishedPulling="2025-12-02 20:17:26.153650026 +0000 UTC m=+329.157025600" observedRunningTime="2025-12-02 20:17:26.696466567 +0000 UTC m=+329.699842141" watchObservedRunningTime="2025-12-02 20:17:26.69939245 +0000 UTC m=+329.702768004" Dec 02 20:17:27 crc kubenswrapper[4796]: I1202 20:17:27.662263 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62m7r" event={"ID":"888e8184-64ca-4f8a-8a7e-ccf06695e6ec","Type":"ContainerStarted","Data":"ec13071af2e4895621c5af949cfee31f1d490658a8851b4e8869a9e67b07a400"} Dec 02 20:17:27 crc kubenswrapper[4796]: I1202 20:17:27.664272 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p75xc" event={"ID":"f4b761ec-a0de-4a6b-bfcc-85d9de6fd4cd","Type":"ContainerStarted","Data":"ea899001138b5d952e99d8ce4eeb9b0a4305a0a9475012033fce0520fa69d7a4"} Dec 02 20:17:27 crc kubenswrapper[4796]: I1202 20:17:27.726062 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p75xc" podStartSLOduration=3.270263511 podStartE2EDuration="5.726035423s" podCreationTimestamp="2025-12-02 20:17:22 +0000 UTC" firstStartedPulling="2025-12-02 20:17:24.621528974 +0000 UTC m=+327.624904548" lastFinishedPulling="2025-12-02 20:17:27.077300916 +0000 UTC m=+330.080676460" observedRunningTime="2025-12-02 20:17:27.725009967 +0000 UTC m=+330.728385501" watchObservedRunningTime="2025-12-02 20:17:27.726035423 +0000 UTC m=+330.729410957" Dec 02 20:17:28 crc kubenswrapper[4796]: I1202 20:17:28.799826 4796 generic.go:334] "Generic (PLEG): container finished" podID="888e8184-64ca-4f8a-8a7e-ccf06695e6ec" containerID="ec13071af2e4895621c5af949cfee31f1d490658a8851b4e8869a9e67b07a400" exitCode=0 Dec 02 20:17:28 crc kubenswrapper[4796]: I1202 20:17:28.800869 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62m7r" event={"ID":"888e8184-64ca-4f8a-8a7e-ccf06695e6ec","Type":"ContainerDied","Data":"ec13071af2e4895621c5af949cfee31f1d490658a8851b4e8869a9e67b07a400"} Dec 02 20:17:29 crc kubenswrapper[4796]: I1202 20:17:29.811526 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62m7r" 
event={"ID":"888e8184-64ca-4f8a-8a7e-ccf06695e6ec","Type":"ContainerStarted","Data":"c7d50d8935287b487de2cb51a75adb77ac03b51d47b2bbf5e983ba3b127e8564"} Dec 02 20:17:29 crc kubenswrapper[4796]: I1202 20:17:29.816857 4796 generic.go:334] "Generic (PLEG): container finished" podID="6f6ec00d-4ac5-4442-a915-26d7109a0eac" containerID="207bf92e9669a31830faf6514b068805277d9ca47761c7a489fa597eb89d9086" exitCode=0 Dec 02 20:17:29 crc kubenswrapper[4796]: I1202 20:17:29.816968 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlm4b" event={"ID":"6f6ec00d-4ac5-4442-a915-26d7109a0eac","Type":"ContainerDied","Data":"207bf92e9669a31830faf6514b068805277d9ca47761c7a489fa597eb89d9086"} Dec 02 20:17:29 crc kubenswrapper[4796]: I1202 20:17:29.844614 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-62m7r" podStartSLOduration=2.202756991 podStartE2EDuration="4.844595146s" podCreationTimestamp="2025-12-02 20:17:25 +0000 UTC" firstStartedPulling="2025-12-02 20:17:26.646932944 +0000 UTC m=+329.650308478" lastFinishedPulling="2025-12-02 20:17:29.288771099 +0000 UTC m=+332.292146633" observedRunningTime="2025-12-02 20:17:29.840683698 +0000 UTC m=+332.844059272" watchObservedRunningTime="2025-12-02 20:17:29.844595146 +0000 UTC m=+332.847970690" Dec 02 20:17:30 crc kubenswrapper[4796]: I1202 20:17:30.832882 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlm4b" event={"ID":"6f6ec00d-4ac5-4442-a915-26d7109a0eac","Type":"ContainerStarted","Data":"0f3ed869a4fdd58874efccd87a90ef899d7e4c1ff87af572a3be3a40cf1defeb"} Dec 02 20:17:30 crc kubenswrapper[4796]: I1202 20:17:30.859811 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tlm4b" podStartSLOduration=3.107475073 podStartE2EDuration="6.859778882s" podCreationTimestamp="2025-12-02 20:17:24 +0000 UTC" firstStartedPulling="2025-12-02 20:17:26.653042918 +0000 UTC m=+329.656418492" lastFinishedPulling="2025-12-02 20:17:30.405346757 +0000 UTC m=+333.408722301" observedRunningTime="2025-12-02 20:17:30.856439409 +0000 UTC m=+333.859814963" watchObservedRunningTime="2025-12-02 20:17:30.859778882 +0000 UTC m=+333.863154416" Dec 02 20:17:30 crc kubenswrapper[4796]: E1202 20:17:30.999435 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-bbfb44b7f4602e45549559a42a8d3748ed8972b7bd0f560e6bdb30c1558e9198\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice\": RecentStats: unable to find data in memory cache]" Dec 02 20:17:32 crc kubenswrapper[4796]: I1202 20:17:32.925035 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-p75xc" Dec 02 20:17:32 crc kubenswrapper[4796]: I1202 20:17:32.925108 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-p75xc" Dec 02 20:17:33 crc kubenswrapper[4796]: I1202 20:17:33.512953 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p5k9r" Dec 02 20:17:33 crc kubenswrapper[4796]: I1202 20:17:33.513503 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p5k9r" Dec 02 20:17:33 crc kubenswrapper[4796]: I1202 20:17:33.551378 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p5k9r" Dec 02 20:17:33 crc kubenswrapper[4796]: I1202 20:17:33.910489 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p5k9r" Dec 02 20:17:34 crc kubenswrapper[4796]: I1202 20:17:34.004487 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p75xc" podUID="f4b761ec-a0de-4a6b-bfcc-85d9de6fd4cd" containerName="registry-server" probeResult="failure" output=< Dec 02 20:17:34 crc kubenswrapper[4796]: timeout: failed to connect service ":50051" within 1s Dec 02 20:17:34 crc kubenswrapper[4796]: > Dec 02 20:17:35 crc kubenswrapper[4796]: I1202 20:17:35.305407 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tlm4b" Dec 02 20:17:35 crc kubenswrapper[4796]: I1202 20:17:35.305773 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tlm4b" Dec 02 20:17:35 crc kubenswrapper[4796]: I1202 20:17:35.362647 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tlm4b" Dec 02 20:17:35 crc kubenswrapper[4796]: I1202 20:17:35.921516 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tlm4b" Dec 02 20:17:35 crc kubenswrapper[4796]: I1202 20:17:35.993949 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-62m7r" Dec 02 20:17:35 crc kubenswrapper[4796]: I1202 20:17:35.994867 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-62m7r" Dec 02 20:17:36 crc kubenswrapper[4796]: I1202 20:17:36.048889 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-62m7r" Dec 02 20:17:36 crc kubenswrapper[4796]: I1202 20:17:36.942356 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-62m7r" Dec 02 20:17:41 crc kubenswrapper[4796]: E1202 20:17:41.210193 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-bbfb44b7f4602e45549559a42a8d3748ed8972b7bd0f560e6bdb30c1558e9198\": RecentStats: unable to find data in memory cache]" Dec 02 20:17:42 crc kubenswrapper[4796]: I1202 20:17:42.973874 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-p75xc" Dec 02 20:17:43 crc kubenswrapper[4796]: I1202 20:17:43.026447 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-p75xc" Dec 02 20:17:51 crc kubenswrapper[4796]: E1202 20:17:51.382465 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-bbfb44b7f4602e45549559a42a8d3748ed8972b7bd0f560e6bdb30c1558e9198\": RecentStats: unable to find data in memory cache]" Dec 02 20:17:53 crc kubenswrapper[4796]: I1202 20:17:53.209273 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-54467446f7-svld8"] Dec 02 20:17:53 crc kubenswrapper[4796]: I1202 20:17:53.210016 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-54467446f7-svld8" podUID="3e8ac767-532a-4c91-92ac-b2e145ece897" containerName="controller-manager" containerID="cri-o://2090cd06ab9d6fc920d3508077c47e8be272ed51a0c41c047b1bd00dd3c4222d" gracePeriod=30 Dec 02 20:17:53 crc kubenswrapper[4796]: I1202 20:17:53.617804 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54467446f7-svld8" Dec 02 20:17:53 crc kubenswrapper[4796]: I1202 20:17:53.818644 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e8ac767-532a-4c91-92ac-b2e145ece897-serving-cert\") pod \"3e8ac767-532a-4c91-92ac-b2e145ece897\" (UID: \"3e8ac767-532a-4c91-92ac-b2e145ece897\") " Dec 02 20:17:53 crc kubenswrapper[4796]: I1202 20:17:53.818853 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2flb\" (UniqueName: \"kubernetes.io/projected/3e8ac767-532a-4c91-92ac-b2e145ece897-kube-api-access-n2flb\") pod \"3e8ac767-532a-4c91-92ac-b2e145ece897\" (UID: \"3e8ac767-532a-4c91-92ac-b2e145ece897\") " Dec 02 20:17:53 crc kubenswrapper[4796]: I1202 20:17:53.818876 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e8ac767-532a-4c91-92ac-b2e145ece897-config\") pod \"3e8ac767-532a-4c91-92ac-b2e145ece897\" (UID: \"3e8ac767-532a-4c91-92ac-b2e145ece897\") " Dec 02 20:17:53 crc kubenswrapper[4796]: I1202 20:17:53.818902 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e8ac767-532a-4c91-92ac-b2e145ece897-client-ca\") pod \"3e8ac767-532a-4c91-92ac-b2e145ece897\" (UID: \"3e8ac767-532a-4c91-92ac-b2e145ece897\") " Dec 02 20:17:53 crc kubenswrapper[4796]: I1202 20:17:53.819852 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e8ac767-532a-4c91-92ac-b2e145ece897-client-ca" (OuterVolumeSpecName: "client-ca") pod "3e8ac767-532a-4c91-92ac-b2e145ece897" (UID: "3e8ac767-532a-4c91-92ac-b2e145ece897"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:17:53 crc kubenswrapper[4796]: I1202 20:17:53.819912 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e8ac767-532a-4c91-92ac-b2e145ece897-config" (OuterVolumeSpecName: "config") pod "3e8ac767-532a-4c91-92ac-b2e145ece897" (UID: "3e8ac767-532a-4c91-92ac-b2e145ece897"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:17:53 crc kubenswrapper[4796]: I1202 20:17:53.819930 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3e8ac767-532a-4c91-92ac-b2e145ece897-proxy-ca-bundles\") pod \"3e8ac767-532a-4c91-92ac-b2e145ece897\" (UID: \"3e8ac767-532a-4c91-92ac-b2e145ece897\") " Dec 02 20:17:53 crc kubenswrapper[4796]: I1202 20:17:53.820898 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e8ac767-532a-4c91-92ac-b2e145ece897-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3e8ac767-532a-4c91-92ac-b2e145ece897" (UID: "3e8ac767-532a-4c91-92ac-b2e145ece897"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:17:53 crc kubenswrapper[4796]: I1202 20:17:53.820984 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e8ac767-532a-4c91-92ac-b2e145ece897-config\") on node \"crc\" DevicePath \"\"" Dec 02 20:17:53 crc kubenswrapper[4796]: I1202 20:17:53.820999 4796 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e8ac767-532a-4c91-92ac-b2e145ece897-client-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:17:53 crc kubenswrapper[4796]: I1202 20:17:53.821009 4796 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3e8ac767-532a-4c91-92ac-b2e145ece897-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 02 20:17:53 crc kubenswrapper[4796]: I1202 20:17:53.826031 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e8ac767-532a-4c91-92ac-b2e145ece897-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3e8ac767-532a-4c91-92ac-b2e145ece897" (UID: "3e8ac767-532a-4c91-92ac-b2e145ece897"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:17:53 crc kubenswrapper[4796]: I1202 20:17:53.827144 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e8ac767-532a-4c91-92ac-b2e145ece897-kube-api-access-n2flb" (OuterVolumeSpecName: "kube-api-access-n2flb") pod "3e8ac767-532a-4c91-92ac-b2e145ece897" (UID: "3e8ac767-532a-4c91-92ac-b2e145ece897"). InnerVolumeSpecName "kube-api-access-n2flb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:17:53 crc kubenswrapper[4796]: I1202 20:17:53.922203 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e8ac767-532a-4c91-92ac-b2e145ece897-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 20:17:53 crc kubenswrapper[4796]: I1202 20:17:53.922310 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2flb\" (UniqueName: \"kubernetes.io/projected/3e8ac767-532a-4c91-92ac-b2e145ece897-kube-api-access-n2flb\") on node \"crc\" DevicePath \"\"" Dec 02 20:17:53 crc kubenswrapper[4796]: I1202 20:17:53.998555 4796 generic.go:334] "Generic (PLEG): container finished" podID="3e8ac767-532a-4c91-92ac-b2e145ece897" containerID="2090cd06ab9d6fc920d3508077c47e8be272ed51a0c41c047b1bd00dd3c4222d" exitCode=0 Dec 02 20:17:53 crc kubenswrapper[4796]: I1202 20:17:53.998611 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54467446f7-svld8" event={"ID":"3e8ac767-532a-4c91-92ac-b2e145ece897","Type":"ContainerDied","Data":"2090cd06ab9d6fc920d3508077c47e8be272ed51a0c41c047b1bd00dd3c4222d"} Dec 02 20:17:53 crc kubenswrapper[4796]: I1202 20:17:53.998649 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54467446f7-svld8" event={"ID":"3e8ac767-532a-4c91-92ac-b2e145ece897","Type":"ContainerDied","Data":"6cd131b0d1c581cb9123e282ed05342ba5362ed7e988ec045376879ef2ac5b25"} Dec 02 20:17:53 crc kubenswrapper[4796]: I1202 20:17:53.998672 4796 scope.go:117] "RemoveContainer" containerID="2090cd06ab9d6fc920d3508077c47e8be272ed51a0c41c047b1bd00dd3c4222d" Dec 02 20:17:53 crc kubenswrapper[4796]: I1202 20:17:53.998837 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-54467446f7-svld8" Dec 02 20:17:54 crc kubenswrapper[4796]: I1202 20:17:54.038535 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-54467446f7-svld8"] Dec 02 20:17:54 crc kubenswrapper[4796]: I1202 20:17:54.038837 4796 scope.go:117] "RemoveContainer" containerID="2090cd06ab9d6fc920d3508077c47e8be272ed51a0c41c047b1bd00dd3c4222d" Dec 02 20:17:54 crc kubenswrapper[4796]: E1202 20:17:54.041581 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2090cd06ab9d6fc920d3508077c47e8be272ed51a0c41c047b1bd00dd3c4222d\": container with ID starting with 2090cd06ab9d6fc920d3508077c47e8be272ed51a0c41c047b1bd00dd3c4222d not found: ID does not exist" containerID="2090cd06ab9d6fc920d3508077c47e8be272ed51a0c41c047b1bd00dd3c4222d" Dec 02 20:17:54 crc kubenswrapper[4796]: I1202 20:17:54.041668 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2090cd06ab9d6fc920d3508077c47e8be272ed51a0c41c047b1bd00dd3c4222d"} err="failed to get container status \"2090cd06ab9d6fc920d3508077c47e8be272ed51a0c41c047b1bd00dd3c4222d\": rpc error: code = NotFound desc = could not find container \"2090cd06ab9d6fc920d3508077c47e8be272ed51a0c41c047b1bd00dd3c4222d\": container with ID starting with 2090cd06ab9d6fc920d3508077c47e8be272ed51a0c41c047b1bd00dd3c4222d not found: ID does not exist" Dec 02 20:17:54 crc kubenswrapper[4796]: I1202 20:17:54.044383 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-54467446f7-svld8"] Dec 02 20:17:54 crc kubenswrapper[4796]: I1202 20:17:54.834324 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-74cddcc97-zwp5r"] Dec 02 20:17:54 crc kubenswrapper[4796]: E1202 20:17:54.834736 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e8ac767-532a-4c91-92ac-b2e145ece897" containerName="controller-manager" Dec 02 20:17:54 crc kubenswrapper[4796]: I1202 20:17:54.834762 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e8ac767-532a-4c91-92ac-b2e145ece897" containerName="controller-manager" Dec 02 20:17:54 crc kubenswrapper[4796]: I1202 20:17:54.835010 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e8ac767-532a-4c91-92ac-b2e145ece897" containerName="controller-manager" Dec 02 20:17:54 crc kubenswrapper[4796]: I1202 20:17:54.835808 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-74cddcc97-zwp5r" Dec 02 20:17:54 crc kubenswrapper[4796]: I1202 20:17:54.838736 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 02 20:17:54 crc kubenswrapper[4796]: I1202 20:17:54.839190 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 02 20:17:54 crc kubenswrapper[4796]: I1202 20:17:54.839466 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 02 20:17:54 crc kubenswrapper[4796]: I1202 20:17:54.839804 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 02 20:17:54 crc kubenswrapper[4796]: I1202 20:17:54.840200 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 02 20:17:54 crc kubenswrapper[4796]: I1202 20:17:54.840438 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 02 20:17:54 crc kubenswrapper[4796]: I1202 20:17:54.858383 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 02 20:17:54 crc kubenswrapper[4796]: I1202 20:17:54.867184 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-74cddcc97-zwp5r"] Dec 02 20:17:54 crc kubenswrapper[4796]: I1202 20:17:54.938665 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20712aed-61cf-423e-b841-81986bd23891-client-ca\") pod \"controller-manager-74cddcc97-zwp5r\" (UID: \"20712aed-61cf-423e-b841-81986bd23891\") " pod="openshift-controller-manager/controller-manager-74cddcc97-zwp5r" Dec 02 20:17:54 crc kubenswrapper[4796]: I1202 20:17:54.939007 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20712aed-61cf-423e-b841-81986bd23891-config\") pod \"controller-manager-74cddcc97-zwp5r\" (UID: \"20712aed-61cf-423e-b841-81986bd23891\") " pod="openshift-controller-manager/controller-manager-74cddcc97-zwp5r" Dec 02 20:17:54 crc kubenswrapper[4796]: I1202 20:17:54.939079 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20712aed-61cf-423e-b841-81986bd23891-serving-cert\") pod \"controller-manager-74cddcc97-zwp5r\" (UID: \"20712aed-61cf-423e-b841-81986bd23891\") " pod="openshift-controller-manager/controller-manager-74cddcc97-zwp5r" Dec 02 20:17:54 crc kubenswrapper[4796]: I1202 20:17:54.939140 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/20712aed-61cf-423e-b841-81986bd23891-proxy-ca-bundles\") pod \"controller-manager-74cddcc97-zwp5r\" (UID: \"20712aed-61cf-423e-b841-81986bd23891\") " pod="openshift-controller-manager/controller-manager-74cddcc97-zwp5r" Dec 02 20:17:54 crc kubenswrapper[4796]: I1202 20:17:54.939601 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdlck\" (UniqueName: 
\"kubernetes.io/projected/20712aed-61cf-423e-b841-81986bd23891-kube-api-access-fdlck\") pod \"controller-manager-74cddcc97-zwp5r\" (UID: \"20712aed-61cf-423e-b841-81986bd23891\") " pod="openshift-controller-manager/controller-manager-74cddcc97-zwp5r" Dec 02 20:17:55 crc kubenswrapper[4796]: I1202 20:17:55.041837 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20712aed-61cf-423e-b841-81986bd23891-config\") pod \"controller-manager-74cddcc97-zwp5r\" (UID: \"20712aed-61cf-423e-b841-81986bd23891\") " pod="openshift-controller-manager/controller-manager-74cddcc97-zwp5r" Dec 02 20:17:55 crc kubenswrapper[4796]: I1202 20:17:55.041977 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20712aed-61cf-423e-b841-81986bd23891-serving-cert\") pod \"controller-manager-74cddcc97-zwp5r\" (UID: \"20712aed-61cf-423e-b841-81986bd23891\") " pod="openshift-controller-manager/controller-manager-74cddcc97-zwp5r" Dec 02 20:17:55 crc kubenswrapper[4796]: I1202 20:17:55.042027 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/20712aed-61cf-423e-b841-81986bd23891-proxy-ca-bundles\") pod \"controller-manager-74cddcc97-zwp5r\" (UID: \"20712aed-61cf-423e-b841-81986bd23891\") " pod="openshift-controller-manager/controller-manager-74cddcc97-zwp5r" Dec 02 20:17:55 crc kubenswrapper[4796]: I1202 20:17:55.042128 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdlck\" (UniqueName: \"kubernetes.io/projected/20712aed-61cf-423e-b841-81986bd23891-kube-api-access-fdlck\") pod \"controller-manager-74cddcc97-zwp5r\" (UID: \"20712aed-61cf-423e-b841-81986bd23891\") " pod="openshift-controller-manager/controller-manager-74cddcc97-zwp5r" Dec 02 20:17:55 crc kubenswrapper[4796]: I1202 20:17:55.042206 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20712aed-61cf-423e-b841-81986bd23891-client-ca\") pod \"controller-manager-74cddcc97-zwp5r\" (UID: \"20712aed-61cf-423e-b841-81986bd23891\") " pod="openshift-controller-manager/controller-manager-74cddcc97-zwp5r" Dec 02 20:17:55 crc kubenswrapper[4796]: I1202 20:17:55.044206 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20712aed-61cf-423e-b841-81986bd23891-client-ca\") pod \"controller-manager-74cddcc97-zwp5r\" (UID: \"20712aed-61cf-423e-b841-81986bd23891\") " pod="openshift-controller-manager/controller-manager-74cddcc97-zwp5r" Dec 02 20:17:55 crc kubenswrapper[4796]: I1202 20:17:55.044959 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/20712aed-61cf-423e-b841-81986bd23891-proxy-ca-bundles\") pod \"controller-manager-74cddcc97-zwp5r\" (UID: \"20712aed-61cf-423e-b841-81986bd23891\") " pod="openshift-controller-manager/controller-manager-74cddcc97-zwp5r" Dec 02 20:17:55 crc kubenswrapper[4796]: I1202 20:17:55.045368 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20712aed-61cf-423e-b841-81986bd23891-config\") pod \"controller-manager-74cddcc97-zwp5r\" (UID: \"20712aed-61cf-423e-b841-81986bd23891\") " pod="openshift-controller-manager/controller-manager-74cddcc97-zwp5r" Dec 
02 20:17:55 crc kubenswrapper[4796]: I1202 20:17:55.052591 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20712aed-61cf-423e-b841-81986bd23891-serving-cert\") pod \"controller-manager-74cddcc97-zwp5r\" (UID: \"20712aed-61cf-423e-b841-81986bd23891\") " pod="openshift-controller-manager/controller-manager-74cddcc97-zwp5r" Dec 02 20:17:55 crc kubenswrapper[4796]: I1202 20:17:55.062038 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdlck\" (UniqueName: \"kubernetes.io/projected/20712aed-61cf-423e-b841-81986bd23891-kube-api-access-fdlck\") pod \"controller-manager-74cddcc97-zwp5r\" (UID: \"20712aed-61cf-423e-b841-81986bd23891\") " pod="openshift-controller-manager/controller-manager-74cddcc97-zwp5r" Dec 02 20:17:55 crc kubenswrapper[4796]: I1202 20:17:55.170684 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-74cddcc97-zwp5r" Dec 02 20:17:55 crc kubenswrapper[4796]: I1202 20:17:55.190189 4796 patch_prober.go:28] interesting pod/machine-config-daemon-wzhpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 20:17:55 crc kubenswrapper[4796]: I1202 20:17:55.190335 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 20:17:55 crc kubenswrapper[4796]: I1202 20:17:55.275681 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e8ac767-532a-4c91-92ac-b2e145ece897" path="/var/lib/kubelet/pods/3e8ac767-532a-4c91-92ac-b2e145ece897/volumes" Dec 02 20:17:55 crc kubenswrapper[4796]: I1202 20:17:55.663787 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-74cddcc97-zwp5r"] Dec 02 20:17:56 crc kubenswrapper[4796]: I1202 20:17:56.030750 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74cddcc97-zwp5r" event={"ID":"20712aed-61cf-423e-b841-81986bd23891","Type":"ContainerStarted","Data":"5b62298255fafd6c591fe276bc2e31bdd2dd5d6e116d4e8a6b57e84ed4ef4e2d"} Dec 02 20:17:56 crc kubenswrapper[4796]: I1202 20:17:56.030809 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74cddcc97-zwp5r" event={"ID":"20712aed-61cf-423e-b841-81986bd23891","Type":"ContainerStarted","Data":"0f89e9c64be802948ad7be1d6d67a2e7fdb874b493d51afbf7c1b35b4bed27cc"} Dec 02 20:17:56 crc kubenswrapper[4796]: I1202 20:17:56.031351 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-74cddcc97-zwp5r" Dec 02 20:17:56 crc kubenswrapper[4796]: I1202 20:17:56.042564 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-74cddcc97-zwp5r" Dec 02 20:17:56 crc kubenswrapper[4796]: I1202 20:17:56.055738 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-74cddcc97-zwp5r" podStartSLOduration=3.055719185 
podStartE2EDuration="3.055719185s" podCreationTimestamp="2025-12-02 20:17:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:17:56.049397277 +0000 UTC m=+359.052772821" watchObservedRunningTime="2025-12-02 20:17:56.055719185 +0000 UTC m=+359.059094729" Dec 02 20:18:13 crc kubenswrapper[4796]: I1202 20:18:13.189510 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f8c564845-277lt"] Dec 02 20:18:13 crc kubenswrapper[4796]: I1202 20:18:13.191223 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-f8c564845-277lt" podUID="187a6cf6-849a-42be-a4a3-b248bd6fe654" containerName="route-controller-manager" containerID="cri-o://17a937b0a8e315eeec694137d40385e7e7942510322b0a3b384dac05ba653588" gracePeriod=30 Dec 02 20:18:13 crc kubenswrapper[4796]: I1202 20:18:13.696759 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f8c564845-277lt" Dec 02 20:18:13 crc kubenswrapper[4796]: I1202 20:18:13.745844 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/187a6cf6-849a-42be-a4a3-b248bd6fe654-serving-cert\") pod \"187a6cf6-849a-42be-a4a3-b248bd6fe654\" (UID: \"187a6cf6-849a-42be-a4a3-b248bd6fe654\") " Dec 02 20:18:13 crc kubenswrapper[4796]: I1202 20:18:13.746004 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/187a6cf6-849a-42be-a4a3-b248bd6fe654-config\") pod \"187a6cf6-849a-42be-a4a3-b248bd6fe654\" (UID: \"187a6cf6-849a-42be-a4a3-b248bd6fe654\") " Dec 02 20:18:13 crc kubenswrapper[4796]: I1202 20:18:13.746101 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvc59\" (UniqueName: \"kubernetes.io/projected/187a6cf6-849a-42be-a4a3-b248bd6fe654-kube-api-access-nvc59\") pod \"187a6cf6-849a-42be-a4a3-b248bd6fe654\" (UID: \"187a6cf6-849a-42be-a4a3-b248bd6fe654\") " Dec 02 20:18:13 crc kubenswrapper[4796]: I1202 20:18:13.746142 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/187a6cf6-849a-42be-a4a3-b248bd6fe654-client-ca\") pod \"187a6cf6-849a-42be-a4a3-b248bd6fe654\" (UID: \"187a6cf6-849a-42be-a4a3-b248bd6fe654\") " Dec 02 20:18:13 crc kubenswrapper[4796]: I1202 20:18:13.747999 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/187a6cf6-849a-42be-a4a3-b248bd6fe654-client-ca" (OuterVolumeSpecName: "client-ca") pod "187a6cf6-849a-42be-a4a3-b248bd6fe654" (UID: "187a6cf6-849a-42be-a4a3-b248bd6fe654"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:18:13 crc kubenswrapper[4796]: I1202 20:18:13.749867 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/187a6cf6-849a-42be-a4a3-b248bd6fe654-config" (OuterVolumeSpecName: "config") pod "187a6cf6-849a-42be-a4a3-b248bd6fe654" (UID: "187a6cf6-849a-42be-a4a3-b248bd6fe654"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:18:13 crc kubenswrapper[4796]: I1202 20:18:13.758466 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/187a6cf6-849a-42be-a4a3-b248bd6fe654-kube-api-access-nvc59" (OuterVolumeSpecName: "kube-api-access-nvc59") pod "187a6cf6-849a-42be-a4a3-b248bd6fe654" (UID: "187a6cf6-849a-42be-a4a3-b248bd6fe654"). InnerVolumeSpecName "kube-api-access-nvc59". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:18:13 crc kubenswrapper[4796]: I1202 20:18:13.767426 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/187a6cf6-849a-42be-a4a3-b248bd6fe654-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "187a6cf6-849a-42be-a4a3-b248bd6fe654" (UID: "187a6cf6-849a-42be-a4a3-b248bd6fe654"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:18:13 crc kubenswrapper[4796]: I1202 20:18:13.848279 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/187a6cf6-849a-42be-a4a3-b248bd6fe654-config\") on node \"crc\" DevicePath \"\"" Dec 02 20:18:13 crc kubenswrapper[4796]: I1202 20:18:13.848321 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvc59\" (UniqueName: \"kubernetes.io/projected/187a6cf6-849a-42be-a4a3-b248bd6fe654-kube-api-access-nvc59\") on node \"crc\" DevicePath \"\"" Dec 02 20:18:13 crc kubenswrapper[4796]: I1202 20:18:13.848338 4796 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/187a6cf6-849a-42be-a4a3-b248bd6fe654-client-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:18:13 crc kubenswrapper[4796]: I1202 20:18:13.848353 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/187a6cf6-849a-42be-a4a3-b248bd6fe654-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 20:18:14 crc kubenswrapper[4796]: I1202 20:18:14.154698 4796 generic.go:334] "Generic (PLEG): container finished" podID="187a6cf6-849a-42be-a4a3-b248bd6fe654" containerID="17a937b0a8e315eeec694137d40385e7e7942510322b0a3b384dac05ba653588" exitCode=0 Dec 02 20:18:14 crc kubenswrapper[4796]: I1202 20:18:14.154772 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f8c564845-277lt" event={"ID":"187a6cf6-849a-42be-a4a3-b248bd6fe654","Type":"ContainerDied","Data":"17a937b0a8e315eeec694137d40385e7e7942510322b0a3b384dac05ba653588"} Dec 02 20:18:14 crc kubenswrapper[4796]: I1202 20:18:14.154808 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f8c564845-277lt" Dec 02 20:18:14 crc kubenswrapper[4796]: I1202 20:18:14.154844 4796 scope.go:117] "RemoveContainer" containerID="17a937b0a8e315eeec694137d40385e7e7942510322b0a3b384dac05ba653588" Dec 02 20:18:14 crc kubenswrapper[4796]: I1202 20:18:14.154826 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f8c564845-277lt" event={"ID":"187a6cf6-849a-42be-a4a3-b248bd6fe654","Type":"ContainerDied","Data":"ae67f8fd145ccdc448d2c021a51637c10b6fa86080e99543f620efdc2bc0c180"} Dec 02 20:18:14 crc kubenswrapper[4796]: I1202 20:18:14.184763 4796 scope.go:117] "RemoveContainer" containerID="17a937b0a8e315eeec694137d40385e7e7942510322b0a3b384dac05ba653588" Dec 02 20:18:14 crc kubenswrapper[4796]: E1202 20:18:14.186527 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17a937b0a8e315eeec694137d40385e7e7942510322b0a3b384dac05ba653588\": container with ID starting with 17a937b0a8e315eeec694137d40385e7e7942510322b0a3b384dac05ba653588 not found: ID does not exist" containerID="17a937b0a8e315eeec694137d40385e7e7942510322b0a3b384dac05ba653588" Dec 02 20:18:14 crc kubenswrapper[4796]: I1202 20:18:14.186607 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17a937b0a8e315eeec694137d40385e7e7942510322b0a3b384dac05ba653588"} err="failed to get container status \"17a937b0a8e315eeec694137d40385e7e7942510322b0a3b384dac05ba653588\": rpc error: code = NotFound desc = could not find container \"17a937b0a8e315eeec694137d40385e7e7942510322b0a3b384dac05ba653588\": container with ID starting with 17a937b0a8e315eeec694137d40385e7e7942510322b0a3b384dac05ba653588 not found: ID does not exist" Dec 02 20:18:14 crc kubenswrapper[4796]: I1202 20:18:14.212487 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f8c564845-277lt"] Dec 02 20:18:14 crc kubenswrapper[4796]: I1202 20:18:14.223243 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f8c564845-277lt"] Dec 02 20:18:14 crc kubenswrapper[4796]: I1202 20:18:14.847759 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f4446cbf-qn8ht"] Dec 02 20:18:14 crc kubenswrapper[4796]: E1202 20:18:14.847986 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="187a6cf6-849a-42be-a4a3-b248bd6fe654" containerName="route-controller-manager" Dec 02 20:18:14 crc kubenswrapper[4796]: I1202 20:18:14.848001 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="187a6cf6-849a-42be-a4a3-b248bd6fe654" containerName="route-controller-manager" Dec 02 20:18:14 crc kubenswrapper[4796]: I1202 20:18:14.848103 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="187a6cf6-849a-42be-a4a3-b248bd6fe654" containerName="route-controller-manager" Dec 02 20:18:14 crc kubenswrapper[4796]: I1202 20:18:14.848584 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f4446cbf-qn8ht" Dec 02 20:18:14 crc kubenswrapper[4796]: I1202 20:18:14.855958 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 02 20:18:14 crc kubenswrapper[4796]: I1202 20:18:14.857089 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 02 20:18:14 crc kubenswrapper[4796]: I1202 20:18:14.857180 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 02 20:18:14 crc kubenswrapper[4796]: I1202 20:18:14.857359 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 02 20:18:14 crc kubenswrapper[4796]: I1202 20:18:14.857375 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 02 20:18:14 crc kubenswrapper[4796]: I1202 20:18:14.857433 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 02 20:18:14 crc kubenswrapper[4796]: I1202 20:18:14.870436 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f4446cbf-qn8ht"] Dec 02 20:18:14 crc kubenswrapper[4796]: I1202 20:18:14.970749 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df49e256-48d2-4c10-8b0c-d03f643b6293-config\") pod \"route-controller-manager-f4446cbf-qn8ht\" (UID: \"df49e256-48d2-4c10-8b0c-d03f643b6293\") " pod="openshift-route-controller-manager/route-controller-manager-f4446cbf-qn8ht" Dec 02 20:18:14 crc kubenswrapper[4796]: I1202 20:18:14.970830 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5qxt\" (UniqueName: \"kubernetes.io/projected/df49e256-48d2-4c10-8b0c-d03f643b6293-kube-api-access-d5qxt\") pod \"route-controller-manager-f4446cbf-qn8ht\" (UID: \"df49e256-48d2-4c10-8b0c-d03f643b6293\") " pod="openshift-route-controller-manager/route-controller-manager-f4446cbf-qn8ht" Dec 02 20:18:14 crc kubenswrapper[4796]: I1202 20:18:14.970887 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df49e256-48d2-4c10-8b0c-d03f643b6293-serving-cert\") pod \"route-controller-manager-f4446cbf-qn8ht\" (UID: \"df49e256-48d2-4c10-8b0c-d03f643b6293\") " pod="openshift-route-controller-manager/route-controller-manager-f4446cbf-qn8ht" Dec 02 20:18:14 crc kubenswrapper[4796]: I1202 20:18:14.971000 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df49e256-48d2-4c10-8b0c-d03f643b6293-client-ca\") pod \"route-controller-manager-f4446cbf-qn8ht\" (UID: \"df49e256-48d2-4c10-8b0c-d03f643b6293\") " pod="openshift-route-controller-manager/route-controller-manager-f4446cbf-qn8ht" Dec 02 20:18:15 crc kubenswrapper[4796]: I1202 20:18:15.072564 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df49e256-48d2-4c10-8b0c-d03f643b6293-client-ca\") pod \"route-controller-manager-f4446cbf-qn8ht\" (UID: 
\"df49e256-48d2-4c10-8b0c-d03f643b6293\") " pod="openshift-route-controller-manager/route-controller-manager-f4446cbf-qn8ht" Dec 02 20:18:15 crc kubenswrapper[4796]: I1202 20:18:15.072690 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df49e256-48d2-4c10-8b0c-d03f643b6293-config\") pod \"route-controller-manager-f4446cbf-qn8ht\" (UID: \"df49e256-48d2-4c10-8b0c-d03f643b6293\") " pod="openshift-route-controller-manager/route-controller-manager-f4446cbf-qn8ht" Dec 02 20:18:15 crc kubenswrapper[4796]: I1202 20:18:15.072742 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5qxt\" (UniqueName: \"kubernetes.io/projected/df49e256-48d2-4c10-8b0c-d03f643b6293-kube-api-access-d5qxt\") pod \"route-controller-manager-f4446cbf-qn8ht\" (UID: \"df49e256-48d2-4c10-8b0c-d03f643b6293\") " pod="openshift-route-controller-manager/route-controller-manager-f4446cbf-qn8ht" Dec 02 20:18:15 crc kubenswrapper[4796]: I1202 20:18:15.072799 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df49e256-48d2-4c10-8b0c-d03f643b6293-serving-cert\") pod \"route-controller-manager-f4446cbf-qn8ht\" (UID: \"df49e256-48d2-4c10-8b0c-d03f643b6293\") " pod="openshift-route-controller-manager/route-controller-manager-f4446cbf-qn8ht" Dec 02 20:18:15 crc kubenswrapper[4796]: I1202 20:18:15.078828 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df49e256-48d2-4c10-8b0c-d03f643b6293-client-ca\") pod \"route-controller-manager-f4446cbf-qn8ht\" (UID: \"df49e256-48d2-4c10-8b0c-d03f643b6293\") " pod="openshift-route-controller-manager/route-controller-manager-f4446cbf-qn8ht" Dec 02 20:18:15 crc kubenswrapper[4796]: I1202 20:18:15.086008 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df49e256-48d2-4c10-8b0c-d03f643b6293-config\") pod \"route-controller-manager-f4446cbf-qn8ht\" (UID: \"df49e256-48d2-4c10-8b0c-d03f643b6293\") " pod="openshift-route-controller-manager/route-controller-manager-f4446cbf-qn8ht" Dec 02 20:18:15 crc kubenswrapper[4796]: I1202 20:18:15.092460 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df49e256-48d2-4c10-8b0c-d03f643b6293-serving-cert\") pod \"route-controller-manager-f4446cbf-qn8ht\" (UID: \"df49e256-48d2-4c10-8b0c-d03f643b6293\") " pod="openshift-route-controller-manager/route-controller-manager-f4446cbf-qn8ht" Dec 02 20:18:15 crc kubenswrapper[4796]: I1202 20:18:15.110763 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5qxt\" (UniqueName: \"kubernetes.io/projected/df49e256-48d2-4c10-8b0c-d03f643b6293-kube-api-access-d5qxt\") pod \"route-controller-manager-f4446cbf-qn8ht\" (UID: \"df49e256-48d2-4c10-8b0c-d03f643b6293\") " pod="openshift-route-controller-manager/route-controller-manager-f4446cbf-qn8ht" Dec 02 20:18:15 crc kubenswrapper[4796]: I1202 20:18:15.171912 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f4446cbf-qn8ht" Dec 02 20:18:15 crc kubenswrapper[4796]: I1202 20:18:15.287249 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="187a6cf6-849a-42be-a4a3-b248bd6fe654" path="/var/lib/kubelet/pods/187a6cf6-849a-42be-a4a3-b248bd6fe654/volumes" Dec 02 20:18:15 crc kubenswrapper[4796]: I1202 20:18:15.442513 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f4446cbf-qn8ht"] Dec 02 20:18:15 crc kubenswrapper[4796]: W1202 20:18:15.469789 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf49e256_48d2_4c10_8b0c_d03f643b6293.slice/crio-2a6fffcb46750ca9823f5c7f59f9bf76241cf8d877a7805fd122a3dce6351015 WatchSource:0}: Error finding container 2a6fffcb46750ca9823f5c7f59f9bf76241cf8d877a7805fd122a3dce6351015: Status 404 returned error can't find the container with id 2a6fffcb46750ca9823f5c7f59f9bf76241cf8d877a7805fd122a3dce6351015 Dec 02 20:18:16 crc kubenswrapper[4796]: I1202 20:18:16.176653 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f4446cbf-qn8ht" event={"ID":"df49e256-48d2-4c10-8b0c-d03f643b6293","Type":"ContainerStarted","Data":"45d714e475408b4ea60ad088a24cea24592c16a1b20e68598dc4594a049ae17d"} Dec 02 20:18:16 crc kubenswrapper[4796]: I1202 20:18:16.176721 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f4446cbf-qn8ht" event={"ID":"df49e256-48d2-4c10-8b0c-d03f643b6293","Type":"ContainerStarted","Data":"2a6fffcb46750ca9823f5c7f59f9bf76241cf8d877a7805fd122a3dce6351015"} Dec 02 20:18:16 crc kubenswrapper[4796]: I1202 20:18:16.177965 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-f4446cbf-qn8ht" Dec 02 20:18:16 crc kubenswrapper[4796]: I1202 20:18:16.209910 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-f4446cbf-qn8ht" Dec 02 20:18:16 crc kubenswrapper[4796]: I1202 20:18:16.223903 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-f4446cbf-qn8ht" podStartSLOduration=3.223879055 podStartE2EDuration="3.223879055s" podCreationTimestamp="2025-12-02 20:18:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:18:16.217569906 +0000 UTC m=+379.220945440" watchObservedRunningTime="2025-12-02 20:18:16.223879055 +0000 UTC m=+379.227254599" Dec 02 20:18:16 crc kubenswrapper[4796]: I1202 20:18:16.280742 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-ls4wj"] Dec 02 20:18:16 crc kubenswrapper[4796]: I1202 20:18:16.281411 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-ls4wj" Dec 02 20:18:16 crc kubenswrapper[4796]: I1202 20:18:16.303180 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-ls4wj"] Dec 02 20:18:16 crc kubenswrapper[4796]: I1202 20:18:16.393174 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5d30e2a8-d981-4a64-8a38-7313086c34de-trusted-ca\") pod \"image-registry-66df7c8f76-ls4wj\" (UID: \"5d30e2a8-d981-4a64-8a38-7313086c34de\") " pod="openshift-image-registry/image-registry-66df7c8f76-ls4wj" Dec 02 20:18:16 crc kubenswrapper[4796]: I1202 20:18:16.393656 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5d30e2a8-d981-4a64-8a38-7313086c34de-installation-pull-secrets\") pod \"image-registry-66df7c8f76-ls4wj\" (UID: \"5d30e2a8-d981-4a64-8a38-7313086c34de\") " pod="openshift-image-registry/image-registry-66df7c8f76-ls4wj" Dec 02 20:18:16 crc kubenswrapper[4796]: I1202 20:18:16.393690 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5d30e2a8-d981-4a64-8a38-7313086c34de-bound-sa-token\") pod \"image-registry-66df7c8f76-ls4wj\" (UID: \"5d30e2a8-d981-4a64-8a38-7313086c34de\") " pod="openshift-image-registry/image-registry-66df7c8f76-ls4wj" Dec 02 20:18:16 crc kubenswrapper[4796]: I1202 20:18:16.393743 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-ls4wj\" (UID: \"5d30e2a8-d981-4a64-8a38-7313086c34de\") " pod="openshift-image-registry/image-registry-66df7c8f76-ls4wj" Dec 02 20:18:16 crc kubenswrapper[4796]: I1202 20:18:16.393875 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5d30e2a8-d981-4a64-8a38-7313086c34de-ca-trust-extracted\") pod \"image-registry-66df7c8f76-ls4wj\" (UID: \"5d30e2a8-d981-4a64-8a38-7313086c34de\") " pod="openshift-image-registry/image-registry-66df7c8f76-ls4wj" Dec 02 20:18:16 crc kubenswrapper[4796]: I1202 20:18:16.393941 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtgqp\" (UniqueName: \"kubernetes.io/projected/5d30e2a8-d981-4a64-8a38-7313086c34de-kube-api-access-mtgqp\") pod \"image-registry-66df7c8f76-ls4wj\" (UID: \"5d30e2a8-d981-4a64-8a38-7313086c34de\") " pod="openshift-image-registry/image-registry-66df7c8f76-ls4wj" Dec 02 20:18:16 crc kubenswrapper[4796]: I1202 20:18:16.393967 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5d30e2a8-d981-4a64-8a38-7313086c34de-registry-tls\") pod \"image-registry-66df7c8f76-ls4wj\" (UID: \"5d30e2a8-d981-4a64-8a38-7313086c34de\") " pod="openshift-image-registry/image-registry-66df7c8f76-ls4wj" Dec 02 20:18:16 crc kubenswrapper[4796]: I1202 20:18:16.394207 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/5d30e2a8-d981-4a64-8a38-7313086c34de-registry-certificates\") pod \"image-registry-66df7c8f76-ls4wj\" (UID: \"5d30e2a8-d981-4a64-8a38-7313086c34de\") " pod="openshift-image-registry/image-registry-66df7c8f76-ls4wj" Dec 02 20:18:16 crc kubenswrapper[4796]: I1202 20:18:16.422752 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-ls4wj\" (UID: \"5d30e2a8-d981-4a64-8a38-7313086c34de\") " pod="openshift-image-registry/image-registry-66df7c8f76-ls4wj" Dec 02 20:18:16 crc kubenswrapper[4796]: I1202 20:18:16.495692 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5d30e2a8-d981-4a64-8a38-7313086c34de-registry-certificates\") pod \"image-registry-66df7c8f76-ls4wj\" (UID: \"5d30e2a8-d981-4a64-8a38-7313086c34de\") " pod="openshift-image-registry/image-registry-66df7c8f76-ls4wj" Dec 02 20:18:16 crc kubenswrapper[4796]: I1202 20:18:16.495811 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5d30e2a8-d981-4a64-8a38-7313086c34de-trusted-ca\") pod \"image-registry-66df7c8f76-ls4wj\" (UID: \"5d30e2a8-d981-4a64-8a38-7313086c34de\") " pod="openshift-image-registry/image-registry-66df7c8f76-ls4wj" Dec 02 20:18:16 crc kubenswrapper[4796]: I1202 20:18:16.495840 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5d30e2a8-d981-4a64-8a38-7313086c34de-installation-pull-secrets\") pod \"image-registry-66df7c8f76-ls4wj\" (UID: \"5d30e2a8-d981-4a64-8a38-7313086c34de\") " pod="openshift-image-registry/image-registry-66df7c8f76-ls4wj" Dec 02 20:18:16 crc kubenswrapper[4796]: I1202 20:18:16.495866 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5d30e2a8-d981-4a64-8a38-7313086c34de-bound-sa-token\") pod \"image-registry-66df7c8f76-ls4wj\" (UID: \"5d30e2a8-d981-4a64-8a38-7313086c34de\") " pod="openshift-image-registry/image-registry-66df7c8f76-ls4wj" Dec 02 20:18:16 crc kubenswrapper[4796]: I1202 20:18:16.495921 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5d30e2a8-d981-4a64-8a38-7313086c34de-ca-trust-extracted\") pod \"image-registry-66df7c8f76-ls4wj\" (UID: \"5d30e2a8-d981-4a64-8a38-7313086c34de\") " pod="openshift-image-registry/image-registry-66df7c8f76-ls4wj" Dec 02 20:18:16 crc kubenswrapper[4796]: I1202 20:18:16.495943 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtgqp\" (UniqueName: \"kubernetes.io/projected/5d30e2a8-d981-4a64-8a38-7313086c34de-kube-api-access-mtgqp\") pod \"image-registry-66df7c8f76-ls4wj\" (UID: \"5d30e2a8-d981-4a64-8a38-7313086c34de\") " pod="openshift-image-registry/image-registry-66df7c8f76-ls4wj" Dec 02 20:18:16 crc kubenswrapper[4796]: I1202 20:18:16.495968 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5d30e2a8-d981-4a64-8a38-7313086c34de-registry-tls\") pod \"image-registry-66df7c8f76-ls4wj\" (UID: \"5d30e2a8-d981-4a64-8a38-7313086c34de\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-ls4wj" Dec 02 20:18:16 crc kubenswrapper[4796]: I1202 20:18:16.496736 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5d30e2a8-d981-4a64-8a38-7313086c34de-ca-trust-extracted\") pod \"image-registry-66df7c8f76-ls4wj\" (UID: \"5d30e2a8-d981-4a64-8a38-7313086c34de\") " pod="openshift-image-registry/image-registry-66df7c8f76-ls4wj" Dec 02 20:18:16 crc kubenswrapper[4796]: I1202 20:18:16.497213 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5d30e2a8-d981-4a64-8a38-7313086c34de-trusted-ca\") pod \"image-registry-66df7c8f76-ls4wj\" (UID: \"5d30e2a8-d981-4a64-8a38-7313086c34de\") " pod="openshift-image-registry/image-registry-66df7c8f76-ls4wj" Dec 02 20:18:16 crc kubenswrapper[4796]: I1202 20:18:16.497361 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5d30e2a8-d981-4a64-8a38-7313086c34de-registry-certificates\") pod \"image-registry-66df7c8f76-ls4wj\" (UID: \"5d30e2a8-d981-4a64-8a38-7313086c34de\") " pod="openshift-image-registry/image-registry-66df7c8f76-ls4wj" Dec 02 20:18:16 crc kubenswrapper[4796]: I1202 20:18:16.504482 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5d30e2a8-d981-4a64-8a38-7313086c34de-registry-tls\") pod \"image-registry-66df7c8f76-ls4wj\" (UID: \"5d30e2a8-d981-4a64-8a38-7313086c34de\") " pod="openshift-image-registry/image-registry-66df7c8f76-ls4wj" Dec 02 20:18:16 crc kubenswrapper[4796]: I1202 20:18:16.504786 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5d30e2a8-d981-4a64-8a38-7313086c34de-installation-pull-secrets\") pod \"image-registry-66df7c8f76-ls4wj\" (UID: \"5d30e2a8-d981-4a64-8a38-7313086c34de\") " pod="openshift-image-registry/image-registry-66df7c8f76-ls4wj" Dec 02 20:18:16 crc kubenswrapper[4796]: I1202 20:18:16.514351 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5d30e2a8-d981-4a64-8a38-7313086c34de-bound-sa-token\") pod \"image-registry-66df7c8f76-ls4wj\" (UID: \"5d30e2a8-d981-4a64-8a38-7313086c34de\") " pod="openshift-image-registry/image-registry-66df7c8f76-ls4wj" Dec 02 20:18:16 crc kubenswrapper[4796]: I1202 20:18:16.516732 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtgqp\" (UniqueName: \"kubernetes.io/projected/5d30e2a8-d981-4a64-8a38-7313086c34de-kube-api-access-mtgqp\") pod \"image-registry-66df7c8f76-ls4wj\" (UID: \"5d30e2a8-d981-4a64-8a38-7313086c34de\") " pod="openshift-image-registry/image-registry-66df7c8f76-ls4wj" Dec 02 20:18:16 crc kubenswrapper[4796]: I1202 20:18:16.603793 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-ls4wj" Dec 02 20:18:17 crc kubenswrapper[4796]: I1202 20:18:17.187529 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-ls4wj"] Dec 02 20:18:17 crc kubenswrapper[4796]: W1202 20:18:17.189044 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d30e2a8_d981_4a64_8a38_7313086c34de.slice/crio-b5944e1c9dcdcf2c951c0e94dea85f9ab3d331b813b34ba9d6508e97622e499a WatchSource:0}: Error finding container b5944e1c9dcdcf2c951c0e94dea85f9ab3d331b813b34ba9d6508e97622e499a: Status 404 returned error can't find the container with id b5944e1c9dcdcf2c951c0e94dea85f9ab3d331b813b34ba9d6508e97622e499a Dec 02 20:18:18 crc kubenswrapper[4796]: I1202 20:18:18.196586 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-ls4wj" event={"ID":"5d30e2a8-d981-4a64-8a38-7313086c34de","Type":"ContainerStarted","Data":"30d4558ef17124ad72cdc006e4ba19c134ca608e50d5c8f17eaf2eacd24cd98b"} Dec 02 20:18:18 crc kubenswrapper[4796]: I1202 20:18:18.197428 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-ls4wj" event={"ID":"5d30e2a8-d981-4a64-8a38-7313086c34de","Type":"ContainerStarted","Data":"b5944e1c9dcdcf2c951c0e94dea85f9ab3d331b813b34ba9d6508e97622e499a"} Dec 02 20:18:18 crc kubenswrapper[4796]: I1202 20:18:18.198528 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-ls4wj" Dec 02 20:18:18 crc kubenswrapper[4796]: I1202 20:18:18.231525 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-ls4wj" podStartSLOduration=2.231464544 podStartE2EDuration="2.231464544s" podCreationTimestamp="2025-12-02 20:18:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:18:18.229542446 +0000 UTC m=+381.232917980" watchObservedRunningTime="2025-12-02 20:18:18.231464544 +0000 UTC m=+381.234840118" Dec 02 20:18:25 crc kubenswrapper[4796]: I1202 20:18:25.189941 4796 patch_prober.go:28] interesting pod/machine-config-daemon-wzhpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 20:18:25 crc kubenswrapper[4796]: I1202 20:18:25.190725 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 20:18:36 crc kubenswrapper[4796]: I1202 20:18:36.615516 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-ls4wj" Dec 02 20:18:36 crc kubenswrapper[4796]: I1202 20:18:36.717869 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rgp7h"] Dec 02 20:18:55 crc kubenswrapper[4796]: I1202 20:18:55.189426 4796 patch_prober.go:28] interesting pod/machine-config-daemon-wzhpq container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 20:18:55 crc kubenswrapper[4796]: I1202 20:18:55.190653 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 20:18:55 crc kubenswrapper[4796]: I1202 20:18:55.190738 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" Dec 02 20:18:55 crc kubenswrapper[4796]: I1202 20:18:55.192318 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4edf350718db085522ef32b4b6bc7016bd54791890e5d578b27f86be8c74f767"} pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 20:18:55 crc kubenswrapper[4796]: I1202 20:18:55.192568 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerName="machine-config-daemon" containerID="cri-o://4edf350718db085522ef32b4b6bc7016bd54791890e5d578b27f86be8c74f767" gracePeriod=600 Dec 02 20:18:55 crc kubenswrapper[4796]: I1202 20:18:55.452969 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" event={"ID":"5558dc7c-93f9-4212-bf22-fdec743e47ee","Type":"ContainerDied","Data":"4edf350718db085522ef32b4b6bc7016bd54791890e5d578b27f86be8c74f767"} Dec 02 20:18:55 crc kubenswrapper[4796]: I1202 20:18:55.453690 4796 scope.go:117] "RemoveContainer" containerID="0ff692b0e7bcf0f9571138c9632ae4c22dddcccc147c6b0e7994e7ff6e13aa08" Dec 02 20:18:55 crc kubenswrapper[4796]: I1202 20:18:55.452887 4796 generic.go:334] "Generic (PLEG): container finished" podID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerID="4edf350718db085522ef32b4b6bc7016bd54791890e5d578b27f86be8c74f767" exitCode=0 Dec 02 20:18:56 crc kubenswrapper[4796]: I1202 20:18:56.465749 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" event={"ID":"5558dc7c-93f9-4212-bf22-fdec743e47ee","Type":"ContainerStarted","Data":"22d1e0fd25ff5e073a4946805e750f34011e82cb383730cfb25bb48ca777f3f4"} Dec 02 20:19:01 crc kubenswrapper[4796]: I1202 20:19:01.776088 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" podUID="e8fe08c7-8fcf-44e2-ba0e-08630f94c06d" containerName="registry" containerID="cri-o://3d85e768e80fdb6a1daa5b04f9a0618062ec1ef1734329d306849094a305f52a" gracePeriod=30 Dec 02 20:19:02 crc kubenswrapper[4796]: I1202 20:19:02.254134 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:19:02 crc kubenswrapper[4796]: I1202 20:19:02.382983 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e8fe08c7-8fcf-44e2-ba0e-08630f94c06d-registry-tls\") pod \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " Dec 02 20:19:02 crc kubenswrapper[4796]: I1202 20:19:02.383080 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e8fe08c7-8fcf-44e2-ba0e-08630f94c06d-bound-sa-token\") pod \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " Dec 02 20:19:02 crc kubenswrapper[4796]: I1202 20:19:02.383146 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e8fe08c7-8fcf-44e2-ba0e-08630f94c06d-registry-certificates\") pod \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " Dec 02 20:19:02 crc kubenswrapper[4796]: I1202 20:19:02.383210 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e8fe08c7-8fcf-44e2-ba0e-08630f94c06d-trusted-ca\") pod \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " Dec 02 20:19:02 crc kubenswrapper[4796]: I1202 20:19:02.383399 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " Dec 02 20:19:02 crc kubenswrapper[4796]: I1202 20:19:02.383437 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e8fe08c7-8fcf-44e2-ba0e-08630f94c06d-ca-trust-extracted\") pod \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " Dec 02 20:19:02 crc kubenswrapper[4796]: I1202 20:19:02.383481 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e8fe08c7-8fcf-44e2-ba0e-08630f94c06d-installation-pull-secrets\") pod \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " Dec 02 20:19:02 crc kubenswrapper[4796]: I1202 20:19:02.383558 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-825qh\" (UniqueName: \"kubernetes.io/projected/e8fe08c7-8fcf-44e2-ba0e-08630f94c06d-kube-api-access-825qh\") pod \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\" (UID: \"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d\") " Dec 02 20:19:02 crc kubenswrapper[4796]: I1202 20:19:02.389824 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8fe08c7-8fcf-44e2-ba0e-08630f94c06d-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:19:02 crc kubenswrapper[4796]: I1202 20:19:02.389859 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8fe08c7-8fcf-44e2-ba0e-08630f94c06d-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:19:02 crc kubenswrapper[4796]: I1202 20:19:02.397475 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8fe08c7-8fcf-44e2-ba0e-08630f94c06d-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:19:02 crc kubenswrapper[4796]: I1202 20:19:02.397633 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8fe08c7-8fcf-44e2-ba0e-08630f94c06d-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:19:02 crc kubenswrapper[4796]: I1202 20:19:02.397823 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8fe08c7-8fcf-44e2-ba0e-08630f94c06d-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:19:02 crc kubenswrapper[4796]: I1202 20:19:02.398500 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8fe08c7-8fcf-44e2-ba0e-08630f94c06d-kube-api-access-825qh" (OuterVolumeSpecName: "kube-api-access-825qh") pod "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d"). InnerVolumeSpecName "kube-api-access-825qh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:19:02 crc kubenswrapper[4796]: I1202 20:19:02.410943 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 02 20:19:02 crc kubenswrapper[4796]: I1202 20:19:02.422363 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8fe08c7-8fcf-44e2-ba0e-08630f94c06d-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d" (UID: "e8fe08c7-8fcf-44e2-ba0e-08630f94c06d"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:19:02 crc kubenswrapper[4796]: I1202 20:19:02.486712 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-825qh\" (UniqueName: \"kubernetes.io/projected/e8fe08c7-8fcf-44e2-ba0e-08630f94c06d-kube-api-access-825qh\") on node \"crc\" DevicePath \"\"" Dec 02 20:19:02 crc kubenswrapper[4796]: I1202 20:19:02.486756 4796 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e8fe08c7-8fcf-44e2-ba0e-08630f94c06d-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 02 20:19:02 crc kubenswrapper[4796]: I1202 20:19:02.486770 4796 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e8fe08c7-8fcf-44e2-ba0e-08630f94c06d-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 02 20:19:02 crc kubenswrapper[4796]: I1202 20:19:02.486783 4796 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e8fe08c7-8fcf-44e2-ba0e-08630f94c06d-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 02 20:19:02 crc kubenswrapper[4796]: I1202 20:19:02.486795 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e8fe08c7-8fcf-44e2-ba0e-08630f94c06d-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:19:02 crc kubenswrapper[4796]: I1202 20:19:02.486810 4796 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e8fe08c7-8fcf-44e2-ba0e-08630f94c06d-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 02 20:19:02 crc kubenswrapper[4796]: I1202 20:19:02.486824 4796 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e8fe08c7-8fcf-44e2-ba0e-08630f94c06d-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 02 20:19:02 crc kubenswrapper[4796]: I1202 20:19:02.509500 4796 generic.go:334] "Generic (PLEG): container finished" podID="e8fe08c7-8fcf-44e2-ba0e-08630f94c06d" containerID="3d85e768e80fdb6a1daa5b04f9a0618062ec1ef1734329d306849094a305f52a" exitCode=0 Dec 02 20:19:02 crc kubenswrapper[4796]: I1202 20:19:02.509556 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" event={"ID":"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d","Type":"ContainerDied","Data":"3d85e768e80fdb6a1daa5b04f9a0618062ec1ef1734329d306849094a305f52a"} Dec 02 20:19:02 crc kubenswrapper[4796]: I1202 20:19:02.509589 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" event={"ID":"e8fe08c7-8fcf-44e2-ba0e-08630f94c06d","Type":"ContainerDied","Data":"3a2a6cbc99a251e4ccd8eac099b82f5056437992e396d29600c2345826466f30"} Dec 02 20:19:02 crc kubenswrapper[4796]: I1202 20:19:02.509612 4796 scope.go:117] "RemoveContainer" containerID="3d85e768e80fdb6a1daa5b04f9a0618062ec1ef1734329d306849094a305f52a" Dec 02 20:19:02 crc kubenswrapper[4796]: I1202 20:19:02.509723 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rgp7h" Dec 02 20:19:02 crc kubenswrapper[4796]: E1202 20:19:02.528842 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8fe08c7_8fcf_44e2_ba0e_08630f94c06d.slice\": RecentStats: unable to find data in memory cache]" Dec 02 20:19:02 crc kubenswrapper[4796]: I1202 20:19:02.536264 4796 scope.go:117] "RemoveContainer" containerID="3d85e768e80fdb6a1daa5b04f9a0618062ec1ef1734329d306849094a305f52a" Dec 02 20:19:02 crc kubenswrapper[4796]: E1202 20:19:02.536704 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d85e768e80fdb6a1daa5b04f9a0618062ec1ef1734329d306849094a305f52a\": container with ID starting with 3d85e768e80fdb6a1daa5b04f9a0618062ec1ef1734329d306849094a305f52a not found: ID does not exist" containerID="3d85e768e80fdb6a1daa5b04f9a0618062ec1ef1734329d306849094a305f52a" Dec 02 20:19:02 crc kubenswrapper[4796]: I1202 20:19:02.536760 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d85e768e80fdb6a1daa5b04f9a0618062ec1ef1734329d306849094a305f52a"} err="failed to get container status \"3d85e768e80fdb6a1daa5b04f9a0618062ec1ef1734329d306849094a305f52a\": rpc error: code = NotFound desc = could not find container \"3d85e768e80fdb6a1daa5b04f9a0618062ec1ef1734329d306849094a305f52a\": container with ID starting with 3d85e768e80fdb6a1daa5b04f9a0618062ec1ef1734329d306849094a305f52a not found: ID does not exist" Dec 02 20:19:02 crc kubenswrapper[4796]: I1202 20:19:02.552349 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rgp7h"] Dec 02 20:19:02 crc kubenswrapper[4796]: I1202 20:19:02.556224 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rgp7h"] Dec 02 20:19:03 crc kubenswrapper[4796]: I1202 20:19:03.280840 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8fe08c7-8fcf-44e2-ba0e-08630f94c06d" path="/var/lib/kubelet/pods/e8fe08c7-8fcf-44e2-ba0e-08630f94c06d/volumes" Dec 02 20:20:55 crc kubenswrapper[4796]: I1202 20:20:55.189943 4796 patch_prober.go:28] interesting pod/machine-config-daemon-wzhpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 20:20:55 crc kubenswrapper[4796]: I1202 20:20:55.190912 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 20:21:25 crc kubenswrapper[4796]: I1202 20:21:25.189750 4796 patch_prober.go:28] interesting pod/machine-config-daemon-wzhpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 20:21:25 crc kubenswrapper[4796]: I1202 20:21:25.190802 4796 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 20:21:55 crc kubenswrapper[4796]: I1202 20:21:55.189138 4796 patch_prober.go:28] interesting pod/machine-config-daemon-wzhpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 20:21:55 crc kubenswrapper[4796]: I1202 20:21:55.192417 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 20:21:55 crc kubenswrapper[4796]: I1202 20:21:55.192510 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" Dec 02 20:21:55 crc kubenswrapper[4796]: I1202 20:21:55.193943 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"22d1e0fd25ff5e073a4946805e750f34011e82cb383730cfb25bb48ca777f3f4"} pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 20:21:55 crc kubenswrapper[4796]: I1202 20:21:55.194096 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerName="machine-config-daemon" containerID="cri-o://22d1e0fd25ff5e073a4946805e750f34011e82cb383730cfb25bb48ca777f3f4" gracePeriod=600 Dec 02 20:21:56 crc kubenswrapper[4796]: I1202 20:21:56.023672 4796 generic.go:334] "Generic (PLEG): container finished" podID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerID="22d1e0fd25ff5e073a4946805e750f34011e82cb383730cfb25bb48ca777f3f4" exitCode=0 Dec 02 20:21:56 crc kubenswrapper[4796]: I1202 20:21:56.023894 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" event={"ID":"5558dc7c-93f9-4212-bf22-fdec743e47ee","Type":"ContainerDied","Data":"22d1e0fd25ff5e073a4946805e750f34011e82cb383730cfb25bb48ca777f3f4"} Dec 02 20:21:56 crc kubenswrapper[4796]: I1202 20:21:56.024507 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" event={"ID":"5558dc7c-93f9-4212-bf22-fdec743e47ee","Type":"ContainerStarted","Data":"a547bba042a4ae0a5c7b160e108564e0b4924894b3f1b07d2ef5933a2669d856"} Dec 02 20:21:56 crc kubenswrapper[4796]: I1202 20:21:56.024545 4796 scope.go:117] "RemoveContainer" containerID="4edf350718db085522ef32b4b6bc7016bd54791890e5d578b27f86be8c74f767" Dec 02 20:22:31 crc kubenswrapper[4796]: I1202 20:22:31.202993 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x5xmd"] Dec 02 20:22:31 crc kubenswrapper[4796]: E1202 20:22:31.203852 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8fe08c7-8fcf-44e2-ba0e-08630f94c06d" 
containerName="registry" Dec 02 20:22:31 crc kubenswrapper[4796]: I1202 20:22:31.203868 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8fe08c7-8fcf-44e2-ba0e-08630f94c06d" containerName="registry" Dec 02 20:22:31 crc kubenswrapper[4796]: I1202 20:22:31.203966 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8fe08c7-8fcf-44e2-ba0e-08630f94c06d" containerName="registry" Dec 02 20:22:31 crc kubenswrapper[4796]: I1202 20:22:31.204678 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x5xmd" Dec 02 20:22:31 crc kubenswrapper[4796]: I1202 20:22:31.206846 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 02 20:22:31 crc kubenswrapper[4796]: I1202 20:22:31.260285 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x5xmd"] Dec 02 20:22:31 crc kubenswrapper[4796]: I1202 20:22:31.399083 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f8fb22d-6596-49d6-a76d-a1952a63c9a3-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x5xmd\" (UID: \"1f8fb22d-6596-49d6-a76d-a1952a63c9a3\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x5xmd" Dec 02 20:22:31 crc kubenswrapper[4796]: I1202 20:22:31.399155 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lln4t\" (UniqueName: \"kubernetes.io/projected/1f8fb22d-6596-49d6-a76d-a1952a63c9a3-kube-api-access-lln4t\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x5xmd\" (UID: \"1f8fb22d-6596-49d6-a76d-a1952a63c9a3\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x5xmd" Dec 02 20:22:31 crc kubenswrapper[4796]: I1202 20:22:31.399588 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f8fb22d-6596-49d6-a76d-a1952a63c9a3-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x5xmd\" (UID: \"1f8fb22d-6596-49d6-a76d-a1952a63c9a3\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x5xmd" Dec 02 20:22:31 crc kubenswrapper[4796]: I1202 20:22:31.503118 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f8fb22d-6596-49d6-a76d-a1952a63c9a3-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x5xmd\" (UID: \"1f8fb22d-6596-49d6-a76d-a1952a63c9a3\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x5xmd" Dec 02 20:22:31 crc kubenswrapper[4796]: I1202 20:22:31.503230 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lln4t\" (UniqueName: \"kubernetes.io/projected/1f8fb22d-6596-49d6-a76d-a1952a63c9a3-kube-api-access-lln4t\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x5xmd\" (UID: \"1f8fb22d-6596-49d6-a76d-a1952a63c9a3\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x5xmd" Dec 02 20:22:31 crc kubenswrapper[4796]: I1202 20:22:31.503361 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f8fb22d-6596-49d6-a76d-a1952a63c9a3-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x5xmd\" (UID: \"1f8fb22d-6596-49d6-a76d-a1952a63c9a3\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x5xmd" Dec 02 20:22:31 crc kubenswrapper[4796]: I1202 20:22:31.503747 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f8fb22d-6596-49d6-a76d-a1952a63c9a3-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x5xmd\" (UID: \"1f8fb22d-6596-49d6-a76d-a1952a63c9a3\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x5xmd" Dec 02 20:22:31 crc kubenswrapper[4796]: I1202 20:22:31.504138 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f8fb22d-6596-49d6-a76d-a1952a63c9a3-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x5xmd\" (UID: \"1f8fb22d-6596-49d6-a76d-a1952a63c9a3\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x5xmd" Dec 02 20:22:31 crc kubenswrapper[4796]: I1202 20:22:31.531139 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lln4t\" (UniqueName: \"kubernetes.io/projected/1f8fb22d-6596-49d6-a76d-a1952a63c9a3-kube-api-access-lln4t\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x5xmd\" (UID: \"1f8fb22d-6596-49d6-a76d-a1952a63c9a3\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x5xmd" Dec 02 20:22:31 crc kubenswrapper[4796]: I1202 20:22:31.825761 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x5xmd" Dec 02 20:22:32 crc kubenswrapper[4796]: I1202 20:22:32.339903 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x5xmd"] Dec 02 20:22:33 crc kubenswrapper[4796]: I1202 20:22:33.298575 4796 generic.go:334] "Generic (PLEG): container finished" podID="1f8fb22d-6596-49d6-a76d-a1952a63c9a3" containerID="8c014d469154125bca978c1d010c8502f5e1de65c4a27e5ef30c38ebebbb13ff" exitCode=0 Dec 02 20:22:33 crc kubenswrapper[4796]: I1202 20:22:33.298739 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x5xmd" event={"ID":"1f8fb22d-6596-49d6-a76d-a1952a63c9a3","Type":"ContainerDied","Data":"8c014d469154125bca978c1d010c8502f5e1de65c4a27e5ef30c38ebebbb13ff"} Dec 02 20:22:33 crc kubenswrapper[4796]: I1202 20:22:33.299202 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x5xmd" event={"ID":"1f8fb22d-6596-49d6-a76d-a1952a63c9a3","Type":"ContainerStarted","Data":"7229bd6ac8b2f3223d2b55f375ede85a0c2e23438af7a1f60738648f40aa139e"} Dec 02 20:22:33 crc kubenswrapper[4796]: I1202 20:22:33.304495 4796 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 20:22:35 crc kubenswrapper[4796]: I1202 20:22:35.327236 4796 generic.go:334] "Generic (PLEG): container finished" podID="1f8fb22d-6596-49d6-a76d-a1952a63c9a3" containerID="047bd0c7f5d54e4daf8a194a90dea27b2f5478f9fd225a8554c78daf9890454c" exitCode=0 Dec 02 20:22:35 crc kubenswrapper[4796]: I1202 20:22:35.327397 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x5xmd" event={"ID":"1f8fb22d-6596-49d6-a76d-a1952a63c9a3","Type":"ContainerDied","Data":"047bd0c7f5d54e4daf8a194a90dea27b2f5478f9fd225a8554c78daf9890454c"} Dec 02 20:22:36 crc kubenswrapper[4796]: I1202 20:22:36.333472 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x5xmd" event={"ID":"1f8fb22d-6596-49d6-a76d-a1952a63c9a3","Type":"ContainerStarted","Data":"238c4cc84c84b15cbb1e57d693dd15a1b02640ba7f819c859eb3ad646172ef17"} Dec 02 20:22:36 crc kubenswrapper[4796]: I1202 20:22:36.358998 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x5xmd" podStartSLOduration=4.033685425 podStartE2EDuration="5.358970659s" podCreationTimestamp="2025-12-02 20:22:31 +0000 UTC" firstStartedPulling="2025-12-02 20:22:33.303855416 +0000 UTC m=+636.307230990" lastFinishedPulling="2025-12-02 20:22:34.62914065 +0000 UTC m=+637.632516224" observedRunningTime="2025-12-02 20:22:36.356729794 +0000 UTC m=+639.360105348" watchObservedRunningTime="2025-12-02 20:22:36.358970659 +0000 UTC m=+639.362346193" Dec 02 20:22:37 crc kubenswrapper[4796]: I1202 20:22:37.348464 4796 generic.go:334] "Generic (PLEG): container finished" podID="1f8fb22d-6596-49d6-a76d-a1952a63c9a3" containerID="238c4cc84c84b15cbb1e57d693dd15a1b02640ba7f819c859eb3ad646172ef17" exitCode=0 Dec 02 20:22:37 crc kubenswrapper[4796]: I1202 20:22:37.348569 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x5xmd" event={"ID":"1f8fb22d-6596-49d6-a76d-a1952a63c9a3","Type":"ContainerDied","Data":"238c4cc84c84b15cbb1e57d693dd15a1b02640ba7f819c859eb3ad646172ef17"} Dec 02 20:22:38 crc kubenswrapper[4796]: I1202 20:22:38.640948 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x5xmd" Dec 02 20:22:38 crc kubenswrapper[4796]: I1202 20:22:38.817776 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f8fb22d-6596-49d6-a76d-a1952a63c9a3-util\") pod \"1f8fb22d-6596-49d6-a76d-a1952a63c9a3\" (UID: \"1f8fb22d-6596-49d6-a76d-a1952a63c9a3\") " Dec 02 20:22:38 crc kubenswrapper[4796]: I1202 20:22:38.817910 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lln4t\" (UniqueName: \"kubernetes.io/projected/1f8fb22d-6596-49d6-a76d-a1952a63c9a3-kube-api-access-lln4t\") pod \"1f8fb22d-6596-49d6-a76d-a1952a63c9a3\" (UID: \"1f8fb22d-6596-49d6-a76d-a1952a63c9a3\") " Dec 02 20:22:38 crc kubenswrapper[4796]: I1202 20:22:38.818019 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f8fb22d-6596-49d6-a76d-a1952a63c9a3-bundle\") pod \"1f8fb22d-6596-49d6-a76d-a1952a63c9a3\" (UID: \"1f8fb22d-6596-49d6-a76d-a1952a63c9a3\") " Dec 02 20:22:38 crc kubenswrapper[4796]: I1202 20:22:38.821957 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f8fb22d-6596-49d6-a76d-a1952a63c9a3-bundle" (OuterVolumeSpecName: "bundle") pod "1f8fb22d-6596-49d6-a76d-a1952a63c9a3" (UID: "1f8fb22d-6596-49d6-a76d-a1952a63c9a3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:22:38 crc kubenswrapper[4796]: I1202 20:22:38.827449 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f8fb22d-6596-49d6-a76d-a1952a63c9a3-kube-api-access-lln4t" (OuterVolumeSpecName: "kube-api-access-lln4t") pod "1f8fb22d-6596-49d6-a76d-a1952a63c9a3" (UID: "1f8fb22d-6596-49d6-a76d-a1952a63c9a3"). InnerVolumeSpecName "kube-api-access-lln4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:22:38 crc kubenswrapper[4796]: I1202 20:22:38.846317 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f8fb22d-6596-49d6-a76d-a1952a63c9a3-util" (OuterVolumeSpecName: "util") pod "1f8fb22d-6596-49d6-a76d-a1952a63c9a3" (UID: "1f8fb22d-6596-49d6-a76d-a1952a63c9a3"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:22:38 crc kubenswrapper[4796]: I1202 20:22:38.919688 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lln4t\" (UniqueName: \"kubernetes.io/projected/1f8fb22d-6596-49d6-a76d-a1952a63c9a3-kube-api-access-lln4t\") on node \"crc\" DevicePath \"\"" Dec 02 20:22:38 crc kubenswrapper[4796]: I1202 20:22:38.919745 4796 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f8fb22d-6596-49d6-a76d-a1952a63c9a3-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:22:38 crc kubenswrapper[4796]: I1202 20:22:38.919764 4796 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f8fb22d-6596-49d6-a76d-a1952a63c9a3-util\") on node \"crc\" DevicePath \"\"" Dec 02 20:22:39 crc kubenswrapper[4796]: I1202 20:22:39.366033 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x5xmd" event={"ID":"1f8fb22d-6596-49d6-a76d-a1952a63c9a3","Type":"ContainerDied","Data":"7229bd6ac8b2f3223d2b55f375ede85a0c2e23438af7a1f60738648f40aa139e"} Dec 02 20:22:39 crc kubenswrapper[4796]: I1202 20:22:39.366089 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7229bd6ac8b2f3223d2b55f375ede85a0c2e23438af7a1f60738648f40aa139e" Dec 02 20:22:39 crc kubenswrapper[4796]: I1202 20:22:39.366183 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x5xmd" Dec 02 20:22:41 crc kubenswrapper[4796]: I1202 20:22:41.919770 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-b286j"] Dec 02 20:22:41 crc kubenswrapper[4796]: I1202 20:22:41.920520 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="ovn-controller" containerID="cri-o://04e244b0768662bff1ef8a3d0ad1db2df8f9470ec043bbe007542e37d69a3586" gracePeriod=30 Dec 02 20:22:41 crc kubenswrapper[4796]: I1202 20:22:41.920648 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="ovn-acl-logging" containerID="cri-o://b4b39195ca74c45e0c948605f1741eef4d655ae6ee34c0fdf6fb1159336e99c3" gracePeriod=30 Dec 02 20:22:41 crc kubenswrapper[4796]: I1202 20:22:41.920782 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="northd" containerID="cri-o://0be133bdee26d23a224b8da570240c1862aa542c005d8017b2d1817c7c286769" gracePeriod=30 Dec 02 20:22:41 crc kubenswrapper[4796]: I1202 20:22:41.920873 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://fb637106bbcec18e199338d1deb9718195ec58a98bd3926955d08e7f77401544" gracePeriod=30 Dec 02 20:22:41 crc kubenswrapper[4796]: I1202 20:22:41.920602 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="nbdb" 
containerID="cri-o://045bb912902864e6abcf1d71fd0e3f8dc941197ea1e572d2e2e2011b536ba7bd" gracePeriod=30 Dec 02 20:22:41 crc kubenswrapper[4796]: I1202 20:22:41.921148 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="sbdb" containerID="cri-o://8f94dfdee88013ea62594ac91f02990be3bf0d544fb937f370baccd6720d1f5c" gracePeriod=30 Dec 02 20:22:41 crc kubenswrapper[4796]: I1202 20:22:41.923595 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="kube-rbac-proxy-node" containerID="cri-o://655f6d42d97749b669cfa44687f71146615027902d52926d59558a788be4553a" gracePeriod=30 Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.001026 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="ovnkube-controller" containerID="cri-o://9fc3c8e0b04dacc030172fed3720aecf5315136a65e44ffc91188881db446fcb" gracePeriod=30 Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.386625 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-m672l_03fe6ac0-1095-4336-a25c-4dd0d6e45053/kube-multus/2.log" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.387400 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-m672l_03fe6ac0-1095-4336-a25c-4dd0d6e45053/kube-multus/1.log" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.387455 4796 generic.go:334] "Generic (PLEG): container finished" podID="03fe6ac0-1095-4336-a25c-4dd0d6e45053" containerID="0e6f41a60d2d2ee7c77df0b01172b537f1cac7f3c8eee3a61006aba7d2721a7f" exitCode=2 Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.387551 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-m672l" event={"ID":"03fe6ac0-1095-4336-a25c-4dd0d6e45053","Type":"ContainerDied","Data":"0e6f41a60d2d2ee7c77df0b01172b537f1cac7f3c8eee3a61006aba7d2721a7f"} Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.387616 4796 scope.go:117] "RemoveContainer" containerID="e45c5c8b97a0407e37a1c8e704004a0752d9143a666d1ed8d5df6e954adc172a" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.387966 4796 scope.go:117] "RemoveContainer" containerID="0e6f41a60d2d2ee7c77df0b01172b537f1cac7f3c8eee3a61006aba7d2721a7f" Dec 02 20:22:42 crc kubenswrapper[4796]: E1202 20:22:42.388150 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-m672l_openshift-multus(03fe6ac0-1095-4336-a25c-4dd0d6e45053)\"" pod="openshift-multus/multus-m672l" podUID="03fe6ac0-1095-4336-a25c-4dd0d6e45053" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.390446 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b286j_87a81d4f-9cb5-40b1-93cf-5691b915a68e/ovnkube-controller/3.log" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.393224 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b286j_87a81d4f-9cb5-40b1-93cf-5691b915a68e/ovn-acl-logging/0.log" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.393635 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b286j_87a81d4f-9cb5-40b1-93cf-5691b915a68e/ovn-controller/0.log" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.393914 4796 generic.go:334] "Generic (PLEG): container finished" podID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerID="9fc3c8e0b04dacc030172fed3720aecf5315136a65e44ffc91188881db446fcb" exitCode=0 Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.393938 4796 generic.go:334] "Generic (PLEG): container finished" podID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerID="8f94dfdee88013ea62594ac91f02990be3bf0d544fb937f370baccd6720d1f5c" exitCode=0 Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.393947 4796 generic.go:334] "Generic (PLEG): container finished" podID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerID="045bb912902864e6abcf1d71fd0e3f8dc941197ea1e572d2e2e2011b536ba7bd" exitCode=0 Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.393955 4796 generic.go:334] "Generic (PLEG): container finished" podID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerID="0be133bdee26d23a224b8da570240c1862aa542c005d8017b2d1817c7c286769" exitCode=0 Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.393964 4796 generic.go:334] "Generic (PLEG): container finished" podID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerID="fb637106bbcec18e199338d1deb9718195ec58a98bd3926955d08e7f77401544" exitCode=0 Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.393970 4796 generic.go:334] "Generic (PLEG): container finished" podID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerID="655f6d42d97749b669cfa44687f71146615027902d52926d59558a788be4553a" exitCode=0 Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.393976 4796 generic.go:334] "Generic (PLEG): container finished" podID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerID="b4b39195ca74c45e0c948605f1741eef4d655ae6ee34c0fdf6fb1159336e99c3" exitCode=143 Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.393983 4796 generic.go:334] "Generic (PLEG): container finished" podID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerID="04e244b0768662bff1ef8a3d0ad1db2df8f9470ec043bbe007542e37d69a3586" exitCode=143 Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.394004 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" event={"ID":"87a81d4f-9cb5-40b1-93cf-5691b915a68e","Type":"ContainerDied","Data":"9fc3c8e0b04dacc030172fed3720aecf5315136a65e44ffc91188881db446fcb"} Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.394028 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" event={"ID":"87a81d4f-9cb5-40b1-93cf-5691b915a68e","Type":"ContainerDied","Data":"8f94dfdee88013ea62594ac91f02990be3bf0d544fb937f370baccd6720d1f5c"} Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.394039 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" event={"ID":"87a81d4f-9cb5-40b1-93cf-5691b915a68e","Type":"ContainerDied","Data":"045bb912902864e6abcf1d71fd0e3f8dc941197ea1e572d2e2e2011b536ba7bd"} Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.394048 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" event={"ID":"87a81d4f-9cb5-40b1-93cf-5691b915a68e","Type":"ContainerDied","Data":"0be133bdee26d23a224b8da570240c1862aa542c005d8017b2d1817c7c286769"} Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.394057 4796 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" event={"ID":"87a81d4f-9cb5-40b1-93cf-5691b915a68e","Type":"ContainerDied","Data":"fb637106bbcec18e199338d1deb9718195ec58a98bd3926955d08e7f77401544"} Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.394066 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" event={"ID":"87a81d4f-9cb5-40b1-93cf-5691b915a68e","Type":"ContainerDied","Data":"655f6d42d97749b669cfa44687f71146615027902d52926d59558a788be4553a"} Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.394074 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" event={"ID":"87a81d4f-9cb5-40b1-93cf-5691b915a68e","Type":"ContainerDied","Data":"b4b39195ca74c45e0c948605f1741eef4d655ae6ee34c0fdf6fb1159336e99c3"} Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.394081 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" event={"ID":"87a81d4f-9cb5-40b1-93cf-5691b915a68e","Type":"ContainerDied","Data":"04e244b0768662bff1ef8a3d0ad1db2df8f9470ec043bbe007542e37d69a3586"} Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.410406 4796 scope.go:117] "RemoveContainer" containerID="f0a2cf28b941d4231e586fdbf9805047b83aa21ad325c180afbee60a6d5227c5" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.886586 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b286j_87a81d4f-9cb5-40b1-93cf-5691b915a68e/ovn-acl-logging/0.log" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.887343 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b286j_87a81d4f-9cb5-40b1-93cf-5691b915a68e/ovn-controller/0.log" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.888028 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.980020 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6tkbq"] Dec 02 20:22:42 crc kubenswrapper[4796]: E1202 20:22:42.980302 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="northd" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.980323 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="northd" Dec 02 20:22:42 crc kubenswrapper[4796]: E1202 20:22:42.980336 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="ovnkube-controller" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.980344 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="ovnkube-controller" Dec 02 20:22:42 crc kubenswrapper[4796]: E1202 20:22:42.980354 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="ovnkube-controller" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.980363 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="ovnkube-controller" Dec 02 20:22:42 crc kubenswrapper[4796]: E1202 20:22:42.980371 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="ovn-acl-logging" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.980378 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="ovn-acl-logging" Dec 02 20:22:42 crc kubenswrapper[4796]: E1202 20:22:42.980391 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="nbdb" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.980399 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="nbdb" Dec 02 20:22:42 crc kubenswrapper[4796]: E1202 20:22:42.980414 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="kube-rbac-proxy-node" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.980422 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="kube-rbac-proxy-node" Dec 02 20:22:42 crc kubenswrapper[4796]: E1202 20:22:42.980431 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f8fb22d-6596-49d6-a76d-a1952a63c9a3" containerName="pull" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.980438 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f8fb22d-6596-49d6-a76d-a1952a63c9a3" containerName="pull" Dec 02 20:22:42 crc kubenswrapper[4796]: E1202 20:22:42.980452 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="kube-rbac-proxy-ovn-metrics" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.980460 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="kube-rbac-proxy-ovn-metrics" Dec 02 20:22:42 crc kubenswrapper[4796]: E1202 20:22:42.980477 4796 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="kubecfg-setup" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.980484 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="kubecfg-setup" Dec 02 20:22:42 crc kubenswrapper[4796]: E1202 20:22:42.980493 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="sbdb" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.980500 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="sbdb" Dec 02 20:22:42 crc kubenswrapper[4796]: E1202 20:22:42.980511 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="ovnkube-controller" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.980519 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="ovnkube-controller" Dec 02 20:22:42 crc kubenswrapper[4796]: E1202 20:22:42.980527 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f8fb22d-6596-49d6-a76d-a1952a63c9a3" containerName="extract" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.980535 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f8fb22d-6596-49d6-a76d-a1952a63c9a3" containerName="extract" Dec 02 20:22:42 crc kubenswrapper[4796]: E1202 20:22:42.980547 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="ovn-controller" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.980556 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="ovn-controller" Dec 02 20:22:42 crc kubenswrapper[4796]: E1202 20:22:42.980566 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f8fb22d-6596-49d6-a76d-a1952a63c9a3" containerName="util" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.980574 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f8fb22d-6596-49d6-a76d-a1952a63c9a3" containerName="util" Dec 02 20:22:42 crc kubenswrapper[4796]: E1202 20:22:42.980582 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="ovnkube-controller" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.980589 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="ovnkube-controller" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.980697 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="nbdb" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.980713 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="ovn-controller" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.980725 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f8fb22d-6596-49d6-a76d-a1952a63c9a3" containerName="extract" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.980734 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="ovnkube-controller" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.980744 4796 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="ovnkube-controller" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.980753 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="sbdb" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.980761 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="ovnkube-controller" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.980770 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="kube-rbac-proxy-node" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.980780 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="ovnkube-controller" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.980789 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="ovnkube-controller" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.980799 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="ovn-acl-logging" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.980807 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="kube-rbac-proxy-ovn-metrics" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.980817 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="northd" Dec 02 20:22:42 crc kubenswrapper[4796]: E1202 20:22:42.980925 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="ovnkube-controller" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.980934 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" containerName="ovnkube-controller" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.982992 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.995914 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-run-ovn\") pod \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.995998 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/87a81d4f-9cb5-40b1-93cf-5691b915a68e-ovnkube-script-lib\") pod \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.996024 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-host-run-netns\") pod \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.996109 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-host-kubelet\") pod \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.996129 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-var-lib-openvswitch\") pod \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.996148 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-systemd-units\") pod \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.996171 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-etc-openvswitch\") pod \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.996194 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-run-systemd\") pod \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.996211 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-host-slash\") pod \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.996237 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/87a81d4f-9cb5-40b1-93cf-5691b915a68e-ovnkube-config\") pod \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\" 
(UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.996288 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-host-run-ovn-kubernetes\") pod \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.996314 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/87a81d4f-9cb5-40b1-93cf-5691b915a68e-ovn-node-metrics-cert\") pod \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.996347 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/87a81d4f-9cb5-40b1-93cf-5691b915a68e-env-overrides\") pod \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.996398 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-host-cni-bin\") pod \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.996417 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-log-socket\") pod \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.996442 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-run-openvswitch\") pod \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.996461 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-node-log\") pod \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.996491 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-host-cni-netd\") pod \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.996536 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.996561 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjjqc\" (UniqueName: 
\"kubernetes.io/projected/87a81d4f-9cb5-40b1-93cf-5691b915a68e-kube-api-access-cjjqc\") pod \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\" (UID: \"87a81d4f-9cb5-40b1-93cf-5691b915a68e\") " Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.996621 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "87a81d4f-9cb5-40b1-93cf-5691b915a68e" (UID: "87a81d4f-9cb5-40b1-93cf-5691b915a68e"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.996671 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87a81d4f-9cb5-40b1-93cf-5691b915a68e-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "87a81d4f-9cb5-40b1-93cf-5691b915a68e" (UID: "87a81d4f-9cb5-40b1-93cf-5691b915a68e"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.996711 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "87a81d4f-9cb5-40b1-93cf-5691b915a68e" (UID: "87a81d4f-9cb5-40b1-93cf-5691b915a68e"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.996750 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "87a81d4f-9cb5-40b1-93cf-5691b915a68e" (UID: "87a81d4f-9cb5-40b1-93cf-5691b915a68e"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.996774 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "87a81d4f-9cb5-40b1-93cf-5691b915a68e" (UID: "87a81d4f-9cb5-40b1-93cf-5691b915a68e"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.996788 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "87a81d4f-9cb5-40b1-93cf-5691b915a68e" (UID: "87a81d4f-9cb5-40b1-93cf-5691b915a68e"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.996842 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-host-slash" (OuterVolumeSpecName: "host-slash") pod "87a81d4f-9cb5-40b1-93cf-5691b915a68e" (UID: "87a81d4f-9cb5-40b1-93cf-5691b915a68e"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.996872 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "87a81d4f-9cb5-40b1-93cf-5691b915a68e" (UID: "87a81d4f-9cb5-40b1-93cf-5691b915a68e"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.996810 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "87a81d4f-9cb5-40b1-93cf-5691b915a68e" (UID: "87a81d4f-9cb5-40b1-93cf-5691b915a68e"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.996915 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-log-socket" (OuterVolumeSpecName: "log-socket") pod "87a81d4f-9cb5-40b1-93cf-5691b915a68e" (UID: "87a81d4f-9cb5-40b1-93cf-5691b915a68e"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.996944 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-node-log" (OuterVolumeSpecName: "node-log") pod "87a81d4f-9cb5-40b1-93cf-5691b915a68e" (UID: "87a81d4f-9cb5-40b1-93cf-5691b915a68e"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.996966 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "87a81d4f-9cb5-40b1-93cf-5691b915a68e" (UID: "87a81d4f-9cb5-40b1-93cf-5691b915a68e"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.997129 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "87a81d4f-9cb5-40b1-93cf-5691b915a68e" (UID: "87a81d4f-9cb5-40b1-93cf-5691b915a68e"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.997162 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "87a81d4f-9cb5-40b1-93cf-5691b915a68e" (UID: "87a81d4f-9cb5-40b1-93cf-5691b915a68e"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.997169 4796 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.997166 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87a81d4f-9cb5-40b1-93cf-5691b915a68e-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "87a81d4f-9cb5-40b1-93cf-5691b915a68e" (UID: "87a81d4f-9cb5-40b1-93cf-5691b915a68e"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.997198 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "87a81d4f-9cb5-40b1-93cf-5691b915a68e" (UID: "87a81d4f-9cb5-40b1-93cf-5691b915a68e"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.997304 4796 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.997522 4796 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.997545 4796 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.997560 4796 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-host-slash\") on node \"crc\" DevicePath \"\"" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.997574 4796 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.997589 4796 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-log-socket\") on node \"crc\" DevicePath \"\"" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.997602 4796 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-node-log\") on node \"crc\" DevicePath \"\"" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.997614 4796 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.997629 4796 reconciler_common.go:293] "Volume detached for volume 
\"run-ovn\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.997644 4796 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/87a81d4f-9cb5-40b1-93cf-5691b915a68e-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.997655 4796 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 02 20:22:42 crc kubenswrapper[4796]: I1202 20:22:42.997878 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87a81d4f-9cb5-40b1-93cf-5691b915a68e-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "87a81d4f-9cb5-40b1-93cf-5691b915a68e" (UID: "87a81d4f-9cb5-40b1-93cf-5691b915a68e"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.006946 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87a81d4f-9cb5-40b1-93cf-5691b915a68e-kube-api-access-cjjqc" (OuterVolumeSpecName: "kube-api-access-cjjqc") pod "87a81d4f-9cb5-40b1-93cf-5691b915a68e" (UID: "87a81d4f-9cb5-40b1-93cf-5691b915a68e"). InnerVolumeSpecName "kube-api-access-cjjqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.009642 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87a81d4f-9cb5-40b1-93cf-5691b915a68e-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "87a81d4f-9cb5-40b1-93cf-5691b915a68e" (UID: "87a81d4f-9cb5-40b1-93cf-5691b915a68e"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.022039 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "87a81d4f-9cb5-40b1-93cf-5691b915a68e" (UID: "87a81d4f-9cb5-40b1-93cf-5691b915a68e"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.099609 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-host-run-netns\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.099668 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-log-socket\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.099861 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/42dc5e60-7fc4-4998-8110-a42b64fde1a0-env-overrides\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.099939 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-systemd-units\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.099980 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-run-systemd\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.100012 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-host-kubelet\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.100090 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/42dc5e60-7fc4-4998-8110-a42b64fde1a0-ovn-node-metrics-cert\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.100124 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-host-cni-bin\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.100158 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-host-slash\") pod 
\"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.100180 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gcln\" (UniqueName: \"kubernetes.io/projected/42dc5e60-7fc4-4998-8110-a42b64fde1a0-kube-api-access-9gcln\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.100211 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.100325 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-run-ovn\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.100441 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-host-run-ovn-kubernetes\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.100486 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-host-cni-netd\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.100518 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-node-log\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.100545 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/42dc5e60-7fc4-4998-8110-a42b64fde1a0-ovnkube-script-lib\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.100572 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-run-openvswitch\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.100590 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/42dc5e60-7fc4-4998-8110-a42b64fde1a0-ovnkube-config\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.100638 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-var-lib-openvswitch\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.100669 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-etc-openvswitch\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.100841 4796 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.100860 4796 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/87a81d4f-9cb5-40b1-93cf-5691b915a68e-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.100876 4796 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/87a81d4f-9cb5-40b1-93cf-5691b915a68e-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.100890 4796 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/87a81d4f-9cb5-40b1-93cf-5691b915a68e-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.100903 4796 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.100916 4796 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.100928 4796 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87a81d4f-9cb5-40b1-93cf-5691b915a68e-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.100941 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjjqc\" (UniqueName: \"kubernetes.io/projected/87a81d4f-9cb5-40b1-93cf-5691b915a68e-kube-api-access-cjjqc\") on node \"crc\" DevicePath \"\"" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.201849 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/42dc5e60-7fc4-4998-8110-a42b64fde1a0-ovnkube-script-lib\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.201908 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-run-openvswitch\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.201926 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/42dc5e60-7fc4-4998-8110-a42b64fde1a0-ovnkube-config\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.201947 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-var-lib-openvswitch\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.201966 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-etc-openvswitch\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.201988 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-host-run-netns\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.202002 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-log-socket\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.202020 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/42dc5e60-7fc4-4998-8110-a42b64fde1a0-env-overrides\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.202039 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-systemd-units\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.202053 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-run-systemd\") pod 
\"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.202070 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-host-kubelet\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.202091 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/42dc5e60-7fc4-4998-8110-a42b64fde1a0-ovn-node-metrics-cert\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.202108 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-host-cni-bin\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.202127 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-host-slash\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.202145 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gcln\" (UniqueName: \"kubernetes.io/projected/42dc5e60-7fc4-4998-8110-a42b64fde1a0-kube-api-access-9gcln\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.202167 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.202187 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-run-ovn\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.202211 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-host-run-ovn-kubernetes\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.202227 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-host-cni-netd\") pod \"ovnkube-node-6tkbq\" (UID: 
\"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.202244 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-node-log\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.202322 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-node-log\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.202987 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/42dc5e60-7fc4-4998-8110-a42b64fde1a0-ovnkube-script-lib\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.203028 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-run-openvswitch\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.203417 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/42dc5e60-7fc4-4998-8110-a42b64fde1a0-ovnkube-config\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.203457 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-var-lib-openvswitch\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.203482 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-etc-openvswitch\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.203503 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-host-run-netns\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.203525 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-log-socket\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.203811 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/42dc5e60-7fc4-4998-8110-a42b64fde1a0-env-overrides\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.203853 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-systemd-units\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.203876 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-run-systemd\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.203896 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-host-kubelet\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.204229 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.204279 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-host-run-ovn-kubernetes\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.204311 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-run-ovn\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.204315 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-host-cni-bin\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.204394 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-host-cni-netd\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.204409 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/42dc5e60-7fc4-4998-8110-a42b64fde1a0-host-slash\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.216307 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/42dc5e60-7fc4-4998-8110-a42b64fde1a0-ovn-node-metrics-cert\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.238229 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gcln\" (UniqueName: \"kubernetes.io/projected/42dc5e60-7fc4-4998-8110-a42b64fde1a0-kube-api-access-9gcln\") pod \"ovnkube-node-6tkbq\" (UID: \"42dc5e60-7fc4-4998-8110-a42b64fde1a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.302059 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.403147 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b286j_87a81d4f-9cb5-40b1-93cf-5691b915a68e/ovn-acl-logging/0.log" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.403719 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b286j_87a81d4f-9cb5-40b1-93cf-5691b915a68e/ovn-controller/0.log" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.404362 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.404394 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b286j" event={"ID":"87a81d4f-9cb5-40b1-93cf-5691b915a68e","Type":"ContainerDied","Data":"48cbeb53b893e44b4f08fdd88eece139f2db9f5d740890660a7abc45181d84a0"} Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.404578 4796 scope.go:117] "RemoveContainer" containerID="9fc3c8e0b04dacc030172fed3720aecf5315136a65e44ffc91188881db446fcb" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.406972 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-m672l_03fe6ac0-1095-4336-a25c-4dd0d6e45053/kube-multus/2.log" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.409618 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" event={"ID":"42dc5e60-7fc4-4998-8110-a42b64fde1a0","Type":"ContainerStarted","Data":"1d09b642e6696014648b366931228fbd779fe05d92d4dfba741f3f917245386b"} Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.422034 4796 scope.go:117] "RemoveContainer" containerID="8f94dfdee88013ea62594ac91f02990be3bf0d544fb937f370baccd6720d1f5c" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.440004 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-b286j"] Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.449625 4796 scope.go:117] "RemoveContainer" containerID="045bb912902864e6abcf1d71fd0e3f8dc941197ea1e572d2e2e2011b536ba7bd" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.455652 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-b286j"] Dec 
02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.468458 4796 scope.go:117] "RemoveContainer" containerID="0be133bdee26d23a224b8da570240c1862aa542c005d8017b2d1817c7c286769" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.492675 4796 scope.go:117] "RemoveContainer" containerID="fb637106bbcec18e199338d1deb9718195ec58a98bd3926955d08e7f77401544" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.512729 4796 scope.go:117] "RemoveContainer" containerID="655f6d42d97749b669cfa44687f71146615027902d52926d59558a788be4553a" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.533652 4796 scope.go:117] "RemoveContainer" containerID="b4b39195ca74c45e0c948605f1741eef4d655ae6ee34c0fdf6fb1159336e99c3" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.554331 4796 scope.go:117] "RemoveContainer" containerID="04e244b0768662bff1ef8a3d0ad1db2df8f9470ec043bbe007542e37d69a3586" Dec 02 20:22:43 crc kubenswrapper[4796]: I1202 20:22:43.616310 4796 scope.go:117] "RemoveContainer" containerID="7262f334d8548a3da77083a0647acfd521afac464300437a2547028e195ad7df" Dec 02 20:22:44 crc kubenswrapper[4796]: I1202 20:22:44.417644 4796 generic.go:334] "Generic (PLEG): container finished" podID="42dc5e60-7fc4-4998-8110-a42b64fde1a0" containerID="cae25d74c1461d12f97fc2ecf7a70f16f9ec387f0c9708191272dc972e5e3bf1" exitCode=0 Dec 02 20:22:44 crc kubenswrapper[4796]: I1202 20:22:44.417691 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" event={"ID":"42dc5e60-7fc4-4998-8110-a42b64fde1a0","Type":"ContainerDied","Data":"cae25d74c1461d12f97fc2ecf7a70f16f9ec387f0c9708191272dc972e5e3bf1"} Dec 02 20:22:45 crc kubenswrapper[4796]: I1202 20:22:45.270920 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87a81d4f-9cb5-40b1-93cf-5691b915a68e" path="/var/lib/kubelet/pods/87a81d4f-9cb5-40b1-93cf-5691b915a68e/volumes" Dec 02 20:22:45 crc kubenswrapper[4796]: I1202 20:22:45.426586 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" event={"ID":"42dc5e60-7fc4-4998-8110-a42b64fde1a0","Type":"ContainerStarted","Data":"b8f109da0eb1bc969099f0ee1c11599d7be90f694ab110ceaa6b76a1b0a39aad"} Dec 02 20:22:45 crc kubenswrapper[4796]: I1202 20:22:45.426914 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" event={"ID":"42dc5e60-7fc4-4998-8110-a42b64fde1a0","Type":"ContainerStarted","Data":"67f6bca43d8f44ed59b1476c9e8d93dd7f47d2f08ed818ed2739fe0ad79fa19a"} Dec 02 20:22:45 crc kubenswrapper[4796]: I1202 20:22:45.426925 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" event={"ID":"42dc5e60-7fc4-4998-8110-a42b64fde1a0","Type":"ContainerStarted","Data":"d383e901544bb1194ca86e0400d49004c1cc38f614dc6b5925b8a1e10f36ae69"} Dec 02 20:22:45 crc kubenswrapper[4796]: I1202 20:22:45.426934 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" event={"ID":"42dc5e60-7fc4-4998-8110-a42b64fde1a0","Type":"ContainerStarted","Data":"d54aeea81f5150926bf2abaf81fb6f70ea9c682b07e84b262468e7578496491a"} Dec 02 20:22:45 crc kubenswrapper[4796]: I1202 20:22:45.426945 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" event={"ID":"42dc5e60-7fc4-4998-8110-a42b64fde1a0","Type":"ContainerStarted","Data":"2caf14ed72513cf10d9dcbcfe3d3438add0a90a3bfde051938318c2eaec60ccf"} Dec 02 20:22:45 crc kubenswrapper[4796]: I1202 
20:22:45.426954 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" event={"ID":"42dc5e60-7fc4-4998-8110-a42b64fde1a0","Type":"ContainerStarted","Data":"8f0f23dd3489b0c0ab273c2714b446e1a2fc88a48c4c6976bc92bd625af43a80"} Dec 02 20:22:48 crc kubenswrapper[4796]: I1202 20:22:48.447983 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" event={"ID":"42dc5e60-7fc4-4998-8110-a42b64fde1a0","Type":"ContainerStarted","Data":"abecdb50c1d9791b5327c486fc3c6ce3b7a16e730d7049e584692c49f6c5d413"} Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.188018 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-wpmjh"] Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.189068 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-wpmjh" Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.193193 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.193517 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-j7svt" Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.197845 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.323431 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w"] Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.324460 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w" Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.326338 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-6dkxk" Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.327663 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.332731 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl"] Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.333681 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl" Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.386890 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lgbs\" (UniqueName: \"kubernetes.io/projected/b8d99ae4-a394-4cd6-89c6-9fe019b477df-kube-api-access-7lgbs\") pod \"obo-prometheus-operator-668cf9dfbb-wpmjh\" (UID: \"b8d99ae4-a394-4cd6-89c6-9fe019b477df\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-wpmjh" Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.488364 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lgbs\" (UniqueName: \"kubernetes.io/projected/b8d99ae4-a394-4cd6-89c6-9fe019b477df-kube-api-access-7lgbs\") pod \"obo-prometheus-operator-668cf9dfbb-wpmjh\" (UID: \"b8d99ae4-a394-4cd6-89c6-9fe019b477df\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-wpmjh" Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.488863 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/73b60abc-2642-4af2-b0e5-263c48ca6f05-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl\" (UID: \"73b60abc-2642-4af2-b0e5-263c48ca6f05\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl" Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.488963 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0c2b967c-bf5c-41e0-8d3a-763881157417-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w\" (UID: \"0c2b967c-bf5c-41e0-8d3a-763881157417\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w" Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.488993 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/73b60abc-2642-4af2-b0e5-263c48ca6f05-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl\" (UID: \"73b60abc-2642-4af2-b0e5-263c48ca6f05\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl" Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.489031 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0c2b967c-bf5c-41e0-8d3a-763881157417-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w\" (UID: \"0c2b967c-bf5c-41e0-8d3a-763881157417\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w" Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.527051 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lgbs\" (UniqueName: \"kubernetes.io/projected/b8d99ae4-a394-4cd6-89c6-9fe019b477df-kube-api-access-7lgbs\") pod \"obo-prometheus-operator-668cf9dfbb-wpmjh\" (UID: \"b8d99ae4-a394-4cd6-89c6-9fe019b477df\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-wpmjh" Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.534742 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-dd8nq"] Dec 02 20:22:49 crc kubenswrapper[4796]: 
I1202 20:22:49.538094 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-dd8nq" Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.541668 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-tjs9k" Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.549190 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.590441 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/73b60abc-2642-4af2-b0e5-263c48ca6f05-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl\" (UID: \"73b60abc-2642-4af2-b0e5-263c48ca6f05\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl" Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.591048 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0c2b967c-bf5c-41e0-8d3a-763881157417-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w\" (UID: \"0c2b967c-bf5c-41e0-8d3a-763881157417\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w" Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.591077 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/73b60abc-2642-4af2-b0e5-263c48ca6f05-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl\" (UID: \"73b60abc-2642-4af2-b0e5-263c48ca6f05\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl" Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.591118 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0c2b967c-bf5c-41e0-8d3a-763881157417-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w\" (UID: \"0c2b967c-bf5c-41e0-8d3a-763881157417\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w" Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.594450 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/73b60abc-2642-4af2-b0e5-263c48ca6f05-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl\" (UID: \"73b60abc-2642-4af2-b0e5-263c48ca6f05\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl" Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.594729 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0c2b967c-bf5c-41e0-8d3a-763881157417-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w\" (UID: \"0c2b967c-bf5c-41e0-8d3a-763881157417\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w" Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.595372 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/73b60abc-2642-4af2-b0e5-263c48ca6f05-apiservice-cert\") pod 
\"obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl\" (UID: \"73b60abc-2642-4af2-b0e5-263c48ca6f05\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl" Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.599506 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0c2b967c-bf5c-41e0-8d3a-763881157417-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w\" (UID: \"0c2b967c-bf5c-41e0-8d3a-763881157417\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w" Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.627033 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-ww4jg"] Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.628208 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-ww4jg" Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.630815 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-5w6mr" Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.641438 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w" Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.649004 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl" Dec 02 20:22:49 crc kubenswrapper[4796]: E1202 20:22:49.682591 4796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl_openshift-operators_73b60abc-2642-4af2-b0e5-263c48ca6f05_0(5939593f03ba9565a9fa502ba5ce1a64118f532824dbfd723246acf1f73ccc50): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 20:22:49 crc kubenswrapper[4796]: E1202 20:22:49.682715 4796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl_openshift-operators_73b60abc-2642-4af2-b0e5-263c48ca6f05_0(5939593f03ba9565a9fa502ba5ce1a64118f532824dbfd723246acf1f73ccc50): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl" Dec 02 20:22:49 crc kubenswrapper[4796]: E1202 20:22:49.682742 4796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl_openshift-operators_73b60abc-2642-4af2-b0e5-263c48ca6f05_0(5939593f03ba9565a9fa502ba5ce1a64118f532824dbfd723246acf1f73ccc50): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl" Dec 02 20:22:49 crc kubenswrapper[4796]: E1202 20:22:49.682804 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl_openshift-operators(73b60abc-2642-4af2-b0e5-263c48ca6f05)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl_openshift-operators(73b60abc-2642-4af2-b0e5-263c48ca6f05)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl_openshift-operators_73b60abc-2642-4af2-b0e5-263c48ca6f05_0(5939593f03ba9565a9fa502ba5ce1a64118f532824dbfd723246acf1f73ccc50): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl" podUID="73b60abc-2642-4af2-b0e5-263c48ca6f05" Dec 02 20:22:49 crc kubenswrapper[4796]: E1202 20:22:49.684930 4796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w_openshift-operators_0c2b967c-bf5c-41e0-8d3a-763881157417_0(71e4f2a9f4df5087179ad5018d97c5e6596b6357958592c6ff7f26073becf590): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 20:22:49 crc kubenswrapper[4796]: E1202 20:22:49.684965 4796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w_openshift-operators_0c2b967c-bf5c-41e0-8d3a-763881157417_0(71e4f2a9f4df5087179ad5018d97c5e6596b6357958592c6ff7f26073becf590): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w" Dec 02 20:22:49 crc kubenswrapper[4796]: E1202 20:22:49.684985 4796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w_openshift-operators_0c2b967c-bf5c-41e0-8d3a-763881157417_0(71e4f2a9f4df5087179ad5018d97c5e6596b6357958592c6ff7f26073becf590): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w" Dec 02 20:22:49 crc kubenswrapper[4796]: E1202 20:22:49.685015 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w_openshift-operators(0c2b967c-bf5c-41e0-8d3a-763881157417)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w_openshift-operators(0c2b967c-bf5c-41e0-8d3a-763881157417)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w_openshift-operators_0c2b967c-bf5c-41e0-8d3a-763881157417_0(71e4f2a9f4df5087179ad5018d97c5e6596b6357958592c6ff7f26073becf590): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w" podUID="0c2b967c-bf5c-41e0-8d3a-763881157417" Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.692844 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-272gr\" (UniqueName: \"kubernetes.io/projected/87139126-df78-43da-984c-e8f633bc52a9-kube-api-access-272gr\") pod \"perses-operator-5446b9c989-ww4jg\" (UID: \"87139126-df78-43da-984c-e8f633bc52a9\") " pod="openshift-operators/perses-operator-5446b9c989-ww4jg" Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.692979 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqtgl\" (UniqueName: \"kubernetes.io/projected/e3d237b8-9c00-4ed9-b441-822ba51a7ed5-kube-api-access-wqtgl\") pod \"observability-operator-d8bb48f5d-dd8nq\" (UID: \"e3d237b8-9c00-4ed9-b441-822ba51a7ed5\") " pod="openshift-operators/observability-operator-d8bb48f5d-dd8nq" Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.693051 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/87139126-df78-43da-984c-e8f633bc52a9-openshift-service-ca\") pod \"perses-operator-5446b9c989-ww4jg\" (UID: \"87139126-df78-43da-984c-e8f633bc52a9\") " pod="openshift-operators/perses-operator-5446b9c989-ww4jg" Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.693143 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/e3d237b8-9c00-4ed9-b441-822ba51a7ed5-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-dd8nq\" (UID: \"e3d237b8-9c00-4ed9-b441-822ba51a7ed5\") " pod="openshift-operators/observability-operator-d8bb48f5d-dd8nq" Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.794555 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/e3d237b8-9c00-4ed9-b441-822ba51a7ed5-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-dd8nq\" (UID: \"e3d237b8-9c00-4ed9-b441-822ba51a7ed5\") " pod="openshift-operators/observability-operator-d8bb48f5d-dd8nq" Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.794665 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-272gr\" (UniqueName: \"kubernetes.io/projected/87139126-df78-43da-984c-e8f633bc52a9-kube-api-access-272gr\") pod \"perses-operator-5446b9c989-ww4jg\" (UID: \"87139126-df78-43da-984c-e8f633bc52a9\") " pod="openshift-operators/perses-operator-5446b9c989-ww4jg" Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.794705 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqtgl\" (UniqueName: \"kubernetes.io/projected/e3d237b8-9c00-4ed9-b441-822ba51a7ed5-kube-api-access-wqtgl\") pod \"observability-operator-d8bb48f5d-dd8nq\" (UID: \"e3d237b8-9c00-4ed9-b441-822ba51a7ed5\") " pod="openshift-operators/observability-operator-d8bb48f5d-dd8nq" Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.794733 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/87139126-df78-43da-984c-e8f633bc52a9-openshift-service-ca\") pod 
\"perses-operator-5446b9c989-ww4jg\" (UID: \"87139126-df78-43da-984c-e8f633bc52a9\") " pod="openshift-operators/perses-operator-5446b9c989-ww4jg" Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.796285 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/87139126-df78-43da-984c-e8f633bc52a9-openshift-service-ca\") pod \"perses-operator-5446b9c989-ww4jg\" (UID: \"87139126-df78-43da-984c-e8f633bc52a9\") " pod="openshift-operators/perses-operator-5446b9c989-ww4jg" Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.800881 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/e3d237b8-9c00-4ed9-b441-822ba51a7ed5-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-dd8nq\" (UID: \"e3d237b8-9c00-4ed9-b441-822ba51a7ed5\") " pod="openshift-operators/observability-operator-d8bb48f5d-dd8nq" Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.805672 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-wpmjh" Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.815089 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-272gr\" (UniqueName: \"kubernetes.io/projected/87139126-df78-43da-984c-e8f633bc52a9-kube-api-access-272gr\") pod \"perses-operator-5446b9c989-ww4jg\" (UID: \"87139126-df78-43da-984c-e8f633bc52a9\") " pod="openshift-operators/perses-operator-5446b9c989-ww4jg" Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.817096 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqtgl\" (UniqueName: \"kubernetes.io/projected/e3d237b8-9c00-4ed9-b441-822ba51a7ed5-kube-api-access-wqtgl\") pod \"observability-operator-d8bb48f5d-dd8nq\" (UID: \"e3d237b8-9c00-4ed9-b441-822ba51a7ed5\") " pod="openshift-operators/observability-operator-d8bb48f5d-dd8nq" Dec 02 20:22:49 crc kubenswrapper[4796]: E1202 20:22:49.841735 4796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-wpmjh_openshift-operators_b8d99ae4-a394-4cd6-89c6-9fe019b477df_0(c2de261dc91702762d36be49df3af2b7e12bd757ffd0ecca6128062c53be6aac): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 20:22:49 crc kubenswrapper[4796]: E1202 20:22:49.841822 4796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-wpmjh_openshift-operators_b8d99ae4-a394-4cd6-89c6-9fe019b477df_0(c2de261dc91702762d36be49df3af2b7e12bd757ffd0ecca6128062c53be6aac): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-wpmjh" Dec 02 20:22:49 crc kubenswrapper[4796]: E1202 20:22:49.841855 4796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-wpmjh_openshift-operators_b8d99ae4-a394-4cd6-89c6-9fe019b477df_0(c2de261dc91702762d36be49df3af2b7e12bd757ffd0ecca6128062c53be6aac): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-wpmjh" Dec 02 20:22:49 crc kubenswrapper[4796]: E1202 20:22:49.841911 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-wpmjh_openshift-operators(b8d99ae4-a394-4cd6-89c6-9fe019b477df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-wpmjh_openshift-operators(b8d99ae4-a394-4cd6-89c6-9fe019b477df)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-wpmjh_openshift-operators_b8d99ae4-a394-4cd6-89c6-9fe019b477df_0(c2de261dc91702762d36be49df3af2b7e12bd757ffd0ecca6128062c53be6aac): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-wpmjh" podUID="b8d99ae4-a394-4cd6-89c6-9fe019b477df" Dec 02 20:22:49 crc kubenswrapper[4796]: I1202 20:22:49.866597 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-dd8nq" Dec 02 20:22:49 crc kubenswrapper[4796]: E1202 20:22:49.898337 4796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-dd8nq_openshift-operators_e3d237b8-9c00-4ed9-b441-822ba51a7ed5_0(ff95820a65bd1faadbe962025a3a98ddc7b538f0fa19783904012a4efb077644): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 20:22:49 crc kubenswrapper[4796]: E1202 20:22:49.898509 4796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-dd8nq_openshift-operators_e3d237b8-9c00-4ed9-b441-822ba51a7ed5_0(ff95820a65bd1faadbe962025a3a98ddc7b538f0fa19783904012a4efb077644): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-dd8nq" Dec 02 20:22:49 crc kubenswrapper[4796]: E1202 20:22:49.898542 4796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-dd8nq_openshift-operators_e3d237b8-9c00-4ed9-b441-822ba51a7ed5_0(ff95820a65bd1faadbe962025a3a98ddc7b538f0fa19783904012a4efb077644): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-dd8nq" Dec 02 20:22:49 crc kubenswrapper[4796]: E1202 20:22:49.898631 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-dd8nq_openshift-operators(e3d237b8-9c00-4ed9-b441-822ba51a7ed5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-dd8nq_openshift-operators(e3d237b8-9c00-4ed9-b441-822ba51a7ed5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-dd8nq_openshift-operators_e3d237b8-9c00-4ed9-b441-822ba51a7ed5_0(ff95820a65bd1faadbe962025a3a98ddc7b538f0fa19783904012a4efb077644): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-dd8nq" podUID="e3d237b8-9c00-4ed9-b441-822ba51a7ed5" Dec 02 20:22:50 crc kubenswrapper[4796]: I1202 20:22:50.013796 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-ww4jg" Dec 02 20:22:50 crc kubenswrapper[4796]: E1202 20:22:50.039797 4796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-ww4jg_openshift-operators_87139126-df78-43da-984c-e8f633bc52a9_0(3b3ee56d8da5550962755f7433c8681bc753e5adc8d2d7f6a303c53ea735553f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 20:22:50 crc kubenswrapper[4796]: E1202 20:22:50.039872 4796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-ww4jg_openshift-operators_87139126-df78-43da-984c-e8f633bc52a9_0(3b3ee56d8da5550962755f7433c8681bc753e5adc8d2d7f6a303c53ea735553f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-ww4jg" Dec 02 20:22:50 crc kubenswrapper[4796]: E1202 20:22:50.039894 4796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-ww4jg_openshift-operators_87139126-df78-43da-984c-e8f633bc52a9_0(3b3ee56d8da5550962755f7433c8681bc753e5adc8d2d7f6a303c53ea735553f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-ww4jg" Dec 02 20:22:50 crc kubenswrapper[4796]: E1202 20:22:50.039943 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-ww4jg_openshift-operators(87139126-df78-43da-984c-e8f633bc52a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-ww4jg_openshift-operators(87139126-df78-43da-984c-e8f633bc52a9)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-ww4jg_openshift-operators_87139126-df78-43da-984c-e8f633bc52a9_0(3b3ee56d8da5550962755f7433c8681bc753e5adc8d2d7f6a303c53ea735553f): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-ww4jg" podUID="87139126-df78-43da-984c-e8f633bc52a9" Dec 02 20:22:50 crc kubenswrapper[4796]: I1202 20:22:50.478576 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" event={"ID":"42dc5e60-7fc4-4998-8110-a42b64fde1a0","Type":"ContainerStarted","Data":"799c757d90359970c80df62bc7a170c6930a71d3d20d2efe1d14385d929d271b"} Dec 02 20:22:50 crc kubenswrapper[4796]: I1202 20:22:50.479080 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:50 crc kubenswrapper[4796]: I1202 20:22:50.479094 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:50 crc kubenswrapper[4796]: I1202 20:22:50.510317 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:50 crc kubenswrapper[4796]: I1202 20:22:50.558085 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" podStartSLOduration=8.558055678 podStartE2EDuration="8.558055678s" podCreationTimestamp="2025-12-02 20:22:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:22:50.519926312 +0000 UTC m=+653.523301846" watchObservedRunningTime="2025-12-02 20:22:50.558055678 +0000 UTC m=+653.561431212" Dec 02 20:22:50 crc kubenswrapper[4796]: I1202 20:22:50.836637 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-wpmjh"] Dec 02 20:22:50 crc kubenswrapper[4796]: I1202 20:22:50.836768 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-wpmjh" Dec 02 20:22:50 crc kubenswrapper[4796]: I1202 20:22:50.837221 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-wpmjh" Dec 02 20:22:50 crc kubenswrapper[4796]: I1202 20:22:50.842061 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl"] Dec 02 20:22:50 crc kubenswrapper[4796]: I1202 20:22:50.842240 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl" Dec 02 20:22:50 crc kubenswrapper[4796]: I1202 20:22:50.842954 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl" Dec 02 20:22:50 crc kubenswrapper[4796]: I1202 20:22:50.845213 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w"] Dec 02 20:22:50 crc kubenswrapper[4796]: I1202 20:22:50.845342 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w" Dec 02 20:22:50 crc kubenswrapper[4796]: I1202 20:22:50.845799 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w" Dec 02 20:22:50 crc kubenswrapper[4796]: I1202 20:22:50.848424 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-dd8nq"] Dec 02 20:22:50 crc kubenswrapper[4796]: I1202 20:22:50.848554 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-dd8nq" Dec 02 20:22:50 crc kubenswrapper[4796]: I1202 20:22:50.849204 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-dd8nq" Dec 02 20:22:50 crc kubenswrapper[4796]: I1202 20:22:50.866961 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-ww4jg"] Dec 02 20:22:50 crc kubenswrapper[4796]: I1202 20:22:50.867055 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-ww4jg" Dec 02 20:22:50 crc kubenswrapper[4796]: I1202 20:22:50.867493 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-ww4jg" Dec 02 20:22:50 crc kubenswrapper[4796]: E1202 20:22:50.882100 4796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-wpmjh_openshift-operators_b8d99ae4-a394-4cd6-89c6-9fe019b477df_0(1089e767b59f7361bd1ad20c52dfc1dc48be95a9a7df90018bf81f637a3e6ea2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 20:22:50 crc kubenswrapper[4796]: E1202 20:22:50.882188 4796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-wpmjh_openshift-operators_b8d99ae4-a394-4cd6-89c6-9fe019b477df_0(1089e767b59f7361bd1ad20c52dfc1dc48be95a9a7df90018bf81f637a3e6ea2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-wpmjh" Dec 02 20:22:50 crc kubenswrapper[4796]: E1202 20:22:50.882212 4796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-wpmjh_openshift-operators_b8d99ae4-a394-4cd6-89c6-9fe019b477df_0(1089e767b59f7361bd1ad20c52dfc1dc48be95a9a7df90018bf81f637a3e6ea2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-wpmjh" Dec 02 20:22:50 crc kubenswrapper[4796]: E1202 20:22:50.882282 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-wpmjh_openshift-operators(b8d99ae4-a394-4cd6-89c6-9fe019b477df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-wpmjh_openshift-operators(b8d99ae4-a394-4cd6-89c6-9fe019b477df)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-wpmjh_openshift-operators_b8d99ae4-a394-4cd6-89c6-9fe019b477df_0(1089e767b59f7361bd1ad20c52dfc1dc48be95a9a7df90018bf81f637a3e6ea2): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-wpmjh" podUID="b8d99ae4-a394-4cd6-89c6-9fe019b477df" Dec 02 20:22:50 crc kubenswrapper[4796]: E1202 20:22:50.887613 4796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl_openshift-operators_73b60abc-2642-4af2-b0e5-263c48ca6f05_0(32a3249a859c6db664367319565c08faddd292b12e5325f6241c566d9c571a56): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 20:22:50 crc kubenswrapper[4796]: E1202 20:22:50.887697 4796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl_openshift-operators_73b60abc-2642-4af2-b0e5-263c48ca6f05_0(32a3249a859c6db664367319565c08faddd292b12e5325f6241c566d9c571a56): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl" Dec 02 20:22:50 crc kubenswrapper[4796]: E1202 20:22:50.887724 4796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl_openshift-operators_73b60abc-2642-4af2-b0e5-263c48ca6f05_0(32a3249a859c6db664367319565c08faddd292b12e5325f6241c566d9c571a56): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl" Dec 02 20:22:50 crc kubenswrapper[4796]: E1202 20:22:50.887794 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl_openshift-operators(73b60abc-2642-4af2-b0e5-263c48ca6f05)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl_openshift-operators(73b60abc-2642-4af2-b0e5-263c48ca6f05)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl_openshift-operators_73b60abc-2642-4af2-b0e5-263c48ca6f05_0(32a3249a859c6db664367319565c08faddd292b12e5325f6241c566d9c571a56): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl" podUID="73b60abc-2642-4af2-b0e5-263c48ca6f05" Dec 02 20:22:50 crc kubenswrapper[4796]: E1202 20:22:50.934078 4796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-dd8nq_openshift-operators_e3d237b8-9c00-4ed9-b441-822ba51a7ed5_0(b70bf2806ee83dace8cc43bf2786ecc4dbdee6cbaedf6b6ae237a86ce42c0d7b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
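Every CreatePodSandbox failure in the records above and below reduces to the same condition: when kubelet asks the runtime to set up networking for a new sandbox, no network configuration is found in /etc/kubernetes/cni/net.d/, the RunPodSandbox RPC fails, and pod_workers requeues the pod ("Error syncing pod, skipping"). The sketch below is a rough, illustrative approximation of that "is there any usable CNI config?" check, using only the directory path quoted in the errors; it is not the libcni code that CRI-O actually runs.

```python
# Illustrative approximation of the check behind "no CNI configuration file in
# /etc/kubernetes/cni/net.d/": look for at least one parseable network config
# in that directory. This is a sketch, not the actual libcni/CRI-O implementation.
import json
from pathlib import Path

CNI_CONF_DIR = Path("/etc/kubernetes/cni/net.d")  # directory named in the errors above


def find_cni_configs(conf_dir: Path = CNI_CONF_DIR) -> list[tuple[str, dict]]:
    """Return (filename, parsed-config) pairs for .conf/.conflist/.json files."""
    configs: list[tuple[str, dict]] = []
    if not conf_dir.is_dir():
        return configs
    for path in sorted(conf_dir.iterdir()):
        if path.suffix not in (".conf", ".conflist", ".json"):
            continue
        try:
            configs.append((path.name, json.loads(path.read_text())))
        except (OSError, json.JSONDecodeError):
            continue  # unreadable or malformed files count for nothing, like an empty dir
    return configs


if __name__ == "__main__":
    found = find_cni_configs()
    if not found:
        print("no CNI configuration file found; sandbox creation would keep failing")
    for name, conf in found:
        print(f"{name}: network {conf.get('name', '?')}")
```

Read together with the rest of this section, the errors stop being reported a few seconds after the kube-multus container (in crash-loop back-off below) is restarted at 20:23:11, which is when a usable configuration evidently becomes available.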
Dec 02 20:22:50 crc kubenswrapper[4796]: E1202 20:22:50.934180 4796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-dd8nq_openshift-operators_e3d237b8-9c00-4ed9-b441-822ba51a7ed5_0(b70bf2806ee83dace8cc43bf2786ecc4dbdee6cbaedf6b6ae237a86ce42c0d7b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-dd8nq" Dec 02 20:22:50 crc kubenswrapper[4796]: E1202 20:22:50.934209 4796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-dd8nq_openshift-operators_e3d237b8-9c00-4ed9-b441-822ba51a7ed5_0(b70bf2806ee83dace8cc43bf2786ecc4dbdee6cbaedf6b6ae237a86ce42c0d7b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-dd8nq" Dec 02 20:22:50 crc kubenswrapper[4796]: E1202 20:22:50.934289 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-dd8nq_openshift-operators(e3d237b8-9c00-4ed9-b441-822ba51a7ed5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-dd8nq_openshift-operators(e3d237b8-9c00-4ed9-b441-822ba51a7ed5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-dd8nq_openshift-operators_e3d237b8-9c00-4ed9-b441-822ba51a7ed5_0(b70bf2806ee83dace8cc43bf2786ecc4dbdee6cbaedf6b6ae237a86ce42c0d7b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-dd8nq" podUID="e3d237b8-9c00-4ed9-b441-822ba51a7ed5" Dec 02 20:22:50 crc kubenswrapper[4796]: E1202 20:22:50.942817 4796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w_openshift-operators_0c2b967c-bf5c-41e0-8d3a-763881157417_0(758712c5e090224ab673e7d5855ce648bcf14a34207b47294184681f6d6eecd7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 20:22:50 crc kubenswrapper[4796]: E1202 20:22:50.942907 4796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w_openshift-operators_0c2b967c-bf5c-41e0-8d3a-763881157417_0(758712c5e090224ab673e7d5855ce648bcf14a34207b47294184681f6d6eecd7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w" Dec 02 20:22:50 crc kubenswrapper[4796]: E1202 20:22:50.942935 4796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w_openshift-operators_0c2b967c-bf5c-41e0-8d3a-763881157417_0(758712c5e090224ab673e7d5855ce648bcf14a34207b47294184681f6d6eecd7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w" Dec 02 20:22:50 crc kubenswrapper[4796]: E1202 20:22:50.942999 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w_openshift-operators(0c2b967c-bf5c-41e0-8d3a-763881157417)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w_openshift-operators(0c2b967c-bf5c-41e0-8d3a-763881157417)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w_openshift-operators_0c2b967c-bf5c-41e0-8d3a-763881157417_0(758712c5e090224ab673e7d5855ce648bcf14a34207b47294184681f6d6eecd7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w" podUID="0c2b967c-bf5c-41e0-8d3a-763881157417" Dec 02 20:22:50 crc kubenswrapper[4796]: E1202 20:22:50.961578 4796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-ww4jg_openshift-operators_87139126-df78-43da-984c-e8f633bc52a9_0(9d02d8f7c2f1f16f6154bf92741dda8f5b4382d7c76660afd009a2f0676d2daf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 20:22:50 crc kubenswrapper[4796]: E1202 20:22:50.961641 4796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-ww4jg_openshift-operators_87139126-df78-43da-984c-e8f633bc52a9_0(9d02d8f7c2f1f16f6154bf92741dda8f5b4382d7c76660afd009a2f0676d2daf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-ww4jg" Dec 02 20:22:50 crc kubenswrapper[4796]: E1202 20:22:50.961667 4796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-ww4jg_openshift-operators_87139126-df78-43da-984c-e8f633bc52a9_0(9d02d8f7c2f1f16f6154bf92741dda8f5b4382d7c76660afd009a2f0676d2daf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-ww4jg" Dec 02 20:22:50 crc kubenswrapper[4796]: E1202 20:22:50.961713 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-ww4jg_openshift-operators(87139126-df78-43da-984c-e8f633bc52a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-ww4jg_openshift-operators(87139126-df78-43da-984c-e8f633bc52a9)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-ww4jg_openshift-operators_87139126-df78-43da-984c-e8f633bc52a9_0(9d02d8f7c2f1f16f6154bf92741dda8f5b4382d7c76660afd009a2f0676d2daf): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-ww4jg" podUID="87139126-df78-43da-984c-e8f633bc52a9" Dec 02 20:22:51 crc kubenswrapper[4796]: I1202 20:22:51.483713 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:51 crc kubenswrapper[4796]: I1202 20:22:51.528295 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:22:57 crc kubenswrapper[4796]: I1202 20:22:57.269411 4796 scope.go:117] "RemoveContainer" containerID="0e6f41a60d2d2ee7c77df0b01172b537f1cac7f3c8eee3a61006aba7d2721a7f" Dec 02 20:22:57 crc kubenswrapper[4796]: E1202 20:22:57.270401 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-m672l_openshift-multus(03fe6ac0-1095-4336-a25c-4dd0d6e45053)\"" pod="openshift-multus/multus-m672l" podUID="03fe6ac0-1095-4336-a25c-4dd0d6e45053" Dec 02 20:23:02 crc kubenswrapper[4796]: I1202 20:23:02.264428 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-wpmjh" Dec 02 20:23:02 crc kubenswrapper[4796]: I1202 20:23:02.266154 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-wpmjh" Dec 02 20:23:02 crc kubenswrapper[4796]: E1202 20:23:02.295861 4796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-wpmjh_openshift-operators_b8d99ae4-a394-4cd6-89c6-9fe019b477df_0(13be9d0c92507bc66ae27e846c41d215f0d6f67a86796cc4954bd5678880c839): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 20:23:02 crc kubenswrapper[4796]: E1202 20:23:02.295973 4796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-wpmjh_openshift-operators_b8d99ae4-a394-4cd6-89c6-9fe019b477df_0(13be9d0c92507bc66ae27e846c41d215f0d6f67a86796cc4954bd5678880c839): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-wpmjh" Dec 02 20:23:02 crc kubenswrapper[4796]: E1202 20:23:02.296012 4796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-wpmjh_openshift-operators_b8d99ae4-a394-4cd6-89c6-9fe019b477df_0(13be9d0c92507bc66ae27e846c41d215f0d6f67a86796cc4954bd5678880c839): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
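The kube-multus failure above ("back-off 20s restarting failed container=kube-multus") is kubelet's CrashLoopBackOff behaviour: each failed restart roughly doubles the delay before the next attempt. The sketch below uses the upstream kubelet defaults (10 s base, doubling, capped at 5 minutes) as an assumption, since this node's configuration is not shown in the log; a 20 s delay is what the second retry would get.

```python
# Hedged sketch of kubelet's CrashLoopBackOff schedule, assuming the upstream
# defaults (10s initial back-off, doubled per failed restart, capped at 5 min).
# Values are not read from this node's kubelet configuration.
def crashloop_backoff(restarts: int, base: float = 10.0, cap: float = 300.0) -> list[float]:
    """Back-off delay, in seconds, applied before each of the first `restarts` retries."""
    return [min(base * 2**i, cap) for i in range(restarts)]


print(crashloop_backoff(6))  # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0]
```

Once the container stays up (the ContainerStarted event for kube-multus at 20:23:11 further down), the CNI errors above disappear and the queued openshift-operators pods get their sandboxes.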
pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-wpmjh" Dec 02 20:23:02 crc kubenswrapper[4796]: E1202 20:23:02.296098 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-wpmjh_openshift-operators(b8d99ae4-a394-4cd6-89c6-9fe019b477df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-wpmjh_openshift-operators(b8d99ae4-a394-4cd6-89c6-9fe019b477df)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-wpmjh_openshift-operators_b8d99ae4-a394-4cd6-89c6-9fe019b477df_0(13be9d0c92507bc66ae27e846c41d215f0d6f67a86796cc4954bd5678880c839): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-wpmjh" podUID="b8d99ae4-a394-4cd6-89c6-9fe019b477df" Dec 02 20:23:04 crc kubenswrapper[4796]: I1202 20:23:04.265165 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl" Dec 02 20:23:04 crc kubenswrapper[4796]: I1202 20:23:04.265183 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-ww4jg" Dec 02 20:23:04 crc kubenswrapper[4796]: I1202 20:23:04.265304 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w" Dec 02 20:23:04 crc kubenswrapper[4796]: I1202 20:23:04.266438 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-ww4jg" Dec 02 20:23:04 crc kubenswrapper[4796]: I1202 20:23:04.266461 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w" Dec 02 20:23:04 crc kubenswrapper[4796]: I1202 20:23:04.266461 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl" Dec 02 20:23:04 crc kubenswrapper[4796]: E1202 20:23:04.324627 4796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl_openshift-operators_73b60abc-2642-4af2-b0e5-263c48ca6f05_0(2d7cdb9275716eb2df8a0a8371fec0a337f0597d4e132bd29527fc4cb882f356): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 20:23:04 crc kubenswrapper[4796]: E1202 20:23:04.324806 4796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl_openshift-operators_73b60abc-2642-4af2-b0e5-263c48ca6f05_0(2d7cdb9275716eb2df8a0a8371fec0a337f0597d4e132bd29527fc4cb882f356): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl" Dec 02 20:23:04 crc kubenswrapper[4796]: E1202 20:23:04.324910 4796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl_openshift-operators_73b60abc-2642-4af2-b0e5-263c48ca6f05_0(2d7cdb9275716eb2df8a0a8371fec0a337f0597d4e132bd29527fc4cb882f356): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl" Dec 02 20:23:04 crc kubenswrapper[4796]: E1202 20:23:04.325072 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl_openshift-operators(73b60abc-2642-4af2-b0e5-263c48ca6f05)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl_openshift-operators(73b60abc-2642-4af2-b0e5-263c48ca6f05)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl_openshift-operators_73b60abc-2642-4af2-b0e5-263c48ca6f05_0(2d7cdb9275716eb2df8a0a8371fec0a337f0597d4e132bd29527fc4cb882f356): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl" podUID="73b60abc-2642-4af2-b0e5-263c48ca6f05" Dec 02 20:23:04 crc kubenswrapper[4796]: E1202 20:23:04.332166 4796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-ww4jg_openshift-operators_87139126-df78-43da-984c-e8f633bc52a9_0(a98c5bda5361ab9da9a44d4067b0c90d0cb9c0f3d3b3678c84dc60713ebc6127): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 20:23:04 crc kubenswrapper[4796]: E1202 20:23:04.332321 4796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-ww4jg_openshift-operators_87139126-df78-43da-984c-e8f633bc52a9_0(a98c5bda5361ab9da9a44d4067b0c90d0cb9c0f3d3b3678c84dc60713ebc6127): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-ww4jg" Dec 02 20:23:04 crc kubenswrapper[4796]: E1202 20:23:04.332414 4796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-ww4jg_openshift-operators_87139126-df78-43da-984c-e8f633bc52a9_0(a98c5bda5361ab9da9a44d4067b0c90d0cb9c0f3d3b3678c84dc60713ebc6127): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5446b9c989-ww4jg" Dec 02 20:23:04 crc kubenswrapper[4796]: E1202 20:23:04.332539 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-ww4jg_openshift-operators(87139126-df78-43da-984c-e8f633bc52a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-ww4jg_openshift-operators(87139126-df78-43da-984c-e8f633bc52a9)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-ww4jg_openshift-operators_87139126-df78-43da-984c-e8f633bc52a9_0(a98c5bda5361ab9da9a44d4067b0c90d0cb9c0f3d3b3678c84dc60713ebc6127): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-ww4jg" podUID="87139126-df78-43da-984c-e8f633bc52a9" Dec 02 20:23:04 crc kubenswrapper[4796]: E1202 20:23:04.341595 4796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w_openshift-operators_0c2b967c-bf5c-41e0-8d3a-763881157417_0(74a223320afe00f0cdc44e39c2ddd0b99b3817e68716d00a67da59357e4d8115): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 20:23:04 crc kubenswrapper[4796]: E1202 20:23:04.341682 4796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w_openshift-operators_0c2b967c-bf5c-41e0-8d3a-763881157417_0(74a223320afe00f0cdc44e39c2ddd0b99b3817e68716d00a67da59357e4d8115): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w" Dec 02 20:23:04 crc kubenswrapper[4796]: E1202 20:23:04.341714 4796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w_openshift-operators_0c2b967c-bf5c-41e0-8d3a-763881157417_0(74a223320afe00f0cdc44e39c2ddd0b99b3817e68716d00a67da59357e4d8115): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w" Dec 02 20:23:04 crc kubenswrapper[4796]: E1202 20:23:04.341784 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w_openshift-operators(0c2b967c-bf5c-41e0-8d3a-763881157417)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w_openshift-operators(0c2b967c-bf5c-41e0-8d3a-763881157417)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w_openshift-operators_0c2b967c-bf5c-41e0-8d3a-763881157417_0(74a223320afe00f0cdc44e39c2ddd0b99b3817e68716d00a67da59357e4d8115): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w" podUID="0c2b967c-bf5c-41e0-8d3a-763881157417" Dec 02 20:23:05 crc kubenswrapper[4796]: I1202 20:23:05.264931 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-dd8nq" Dec 02 20:23:05 crc kubenswrapper[4796]: I1202 20:23:05.265788 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-dd8nq" Dec 02 20:23:05 crc kubenswrapper[4796]: E1202 20:23:05.296168 4796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-dd8nq_openshift-operators_e3d237b8-9c00-4ed9-b441-822ba51a7ed5_0(94503de8eb7475844b06bfa0dcb29555c379a4ecf446a8a553ce07acf9a13b21): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 20:23:05 crc kubenswrapper[4796]: E1202 20:23:05.296237 4796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-dd8nq_openshift-operators_e3d237b8-9c00-4ed9-b441-822ba51a7ed5_0(94503de8eb7475844b06bfa0dcb29555c379a4ecf446a8a553ce07acf9a13b21): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-dd8nq" Dec 02 20:23:05 crc kubenswrapper[4796]: E1202 20:23:05.296308 4796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-dd8nq_openshift-operators_e3d237b8-9c00-4ed9-b441-822ba51a7ed5_0(94503de8eb7475844b06bfa0dcb29555c379a4ecf446a8a553ce07acf9a13b21): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-dd8nq" Dec 02 20:23:05 crc kubenswrapper[4796]: E1202 20:23:05.296359 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-dd8nq_openshift-operators(e3d237b8-9c00-4ed9-b441-822ba51a7ed5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-dd8nq_openshift-operators(e3d237b8-9c00-4ed9-b441-822ba51a7ed5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-dd8nq_openshift-operators_e3d237b8-9c00-4ed9-b441-822ba51a7ed5_0(94503de8eb7475844b06bfa0dcb29555c379a4ecf446a8a553ce07acf9a13b21): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-dd8nq" podUID="e3d237b8-9c00-4ed9-b441-822ba51a7ed5" Dec 02 20:23:11 crc kubenswrapper[4796]: I1202 20:23:11.265203 4796 scope.go:117] "RemoveContainer" containerID="0e6f41a60d2d2ee7c77df0b01172b537f1cac7f3c8eee3a61006aba7d2721a7f" Dec 02 20:23:11 crc kubenswrapper[4796]: I1202 20:23:11.623982 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-m672l_03fe6ac0-1095-4336-a25c-4dd0d6e45053/kube-multus/2.log" Dec 02 20:23:11 crc kubenswrapper[4796]: I1202 20:23:11.624462 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-m672l" event={"ID":"03fe6ac0-1095-4336-a25c-4dd0d6e45053","Type":"ContainerStarted","Data":"c96109640fed160477fe4a19c9c4b969f446e2a4dbe2b9847e6115aaaca10c11"} Dec 02 20:23:13 crc kubenswrapper[4796]: I1202 20:23:13.336865 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6tkbq" Dec 02 20:23:14 crc kubenswrapper[4796]: I1202 20:23:14.265113 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-wpmjh" Dec 02 20:23:14 crc kubenswrapper[4796]: I1202 20:23:14.265965 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-wpmjh" Dec 02 20:23:14 crc kubenswrapper[4796]: I1202 20:23:14.753111 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-wpmjh"] Dec 02 20:23:14 crc kubenswrapper[4796]: W1202 20:23:14.770825 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8d99ae4_a394_4cd6_89c6_9fe019b477df.slice/crio-1286fec67fcef4487b328ef2d29ecb4ca39146e8f8ea03bb99a6c978d5fdcc03 WatchSource:0}: Error finding container 1286fec67fcef4487b328ef2d29ecb4ca39146e8f8ea03bb99a6c978d5fdcc03: Status 404 returned error can't find the container with id 1286fec67fcef4487b328ef2d29ecb4ca39146e8f8ea03bb99a6c978d5fdcc03 Dec 02 20:23:15 crc kubenswrapper[4796]: I1202 20:23:15.649555 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-wpmjh" event={"ID":"b8d99ae4-a394-4cd6-89c6-9fe019b477df","Type":"ContainerStarted","Data":"1286fec67fcef4487b328ef2d29ecb4ca39146e8f8ea03bb99a6c978d5fdcc03"} Dec 02 20:23:16 crc kubenswrapper[4796]: I1202 20:23:16.264385 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl" Dec 02 20:23:16 crc kubenswrapper[4796]: I1202 20:23:16.265133 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl" Dec 02 20:23:16 crc kubenswrapper[4796]: I1202 20:23:16.524850 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl"] Dec 02 20:23:16 crc kubenswrapper[4796]: W1202 20:23:16.532188 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73b60abc_2642_4af2_b0e5_263c48ca6f05.slice/crio-f34a168b86f19dcad535cf97a6e3ee419d3cfc6bbd2c78f6e79b4ecc88334efe WatchSource:0}: Error finding container f34a168b86f19dcad535cf97a6e3ee419d3cfc6bbd2c78f6e79b4ecc88334efe: Status 404 returned error can't find the container with id f34a168b86f19dcad535cf97a6e3ee419d3cfc6bbd2c78f6e79b4ecc88334efe Dec 02 20:23:16 crc kubenswrapper[4796]: I1202 20:23:16.656018 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl" event={"ID":"73b60abc-2642-4af2-b0e5-263c48ca6f05","Type":"ContainerStarted","Data":"f34a168b86f19dcad535cf97a6e3ee419d3cfc6bbd2c78f6e79b4ecc88334efe"} Dec 02 20:23:17 crc kubenswrapper[4796]: I1202 20:23:17.269357 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-dd8nq" Dec 02 20:23:17 crc kubenswrapper[4796]: I1202 20:23:17.273908 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-dd8nq" Dec 02 20:23:17 crc kubenswrapper[4796]: I1202 20:23:17.485444 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-dd8nq"] Dec 02 20:23:17 crc kubenswrapper[4796]: W1202 20:23:17.504783 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3d237b8_9c00_4ed9_b441_822ba51a7ed5.slice/crio-fb60dcbc4abf4cf63f705ca7cb8cb881d3c18480c3fb85f273526eacc9610406 WatchSource:0}: Error finding container fb60dcbc4abf4cf63f705ca7cb8cb881d3c18480c3fb85f273526eacc9610406: Status 404 returned error can't find the container with id fb60dcbc4abf4cf63f705ca7cb8cb881d3c18480c3fb85f273526eacc9610406 Dec 02 20:23:17 crc kubenswrapper[4796]: I1202 20:23:17.664077 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-dd8nq" event={"ID":"e3d237b8-9c00-4ed9-b441-822ba51a7ed5","Type":"ContainerStarted","Data":"fb60dcbc4abf4cf63f705ca7cb8cb881d3c18480c3fb85f273526eacc9610406"} Dec 02 20:23:18 crc kubenswrapper[4796]: I1202 20:23:18.264958 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-ww4jg" Dec 02 20:23:18 crc kubenswrapper[4796]: I1202 20:23:18.265709 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-ww4jg" Dec 02 20:23:18 crc kubenswrapper[4796]: I1202 20:23:18.541619 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-ww4jg"] Dec 02 20:23:19 crc kubenswrapper[4796]: I1202 20:23:19.265066 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w" Dec 02 20:23:19 crc kubenswrapper[4796]: I1202 20:23:19.265602 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w" Dec 02 20:23:22 crc kubenswrapper[4796]: W1202 20:23:22.148457 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87139126_df78_43da_984c_e8f633bc52a9.slice/crio-f61ce27c293727c97737b6b8ad8208d6fe832d59aeff67c2bd7b8588fe7423ff WatchSource:0}: Error finding container f61ce27c293727c97737b6b8ad8208d6fe832d59aeff67c2bd7b8588fe7423ff: Status 404 returned error can't find the container with id f61ce27c293727c97737b6b8ad8208d6fe832d59aeff67c2bd7b8588fe7423ff Dec 02 20:23:22 crc kubenswrapper[4796]: I1202 20:23:22.706956 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-ww4jg" event={"ID":"87139126-df78-43da-984c-e8f633bc52a9","Type":"ContainerStarted","Data":"f61ce27c293727c97737b6b8ad8208d6fe832d59aeff67c2bd7b8588fe7423ff"} Dec 02 20:23:23 crc kubenswrapper[4796]: I1202 20:23:23.051682 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w"] Dec 02 20:23:23 crc kubenswrapper[4796]: W1202 20:23:23.057001 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c2b967c_bf5c_41e0_8d3a_763881157417.slice/crio-c2a4ca07d1542011c8c2e102062cfe4d4c0891cdba102981ac6dfa141bb3e4b9 WatchSource:0}: Error finding container c2a4ca07d1542011c8c2e102062cfe4d4c0891cdba102981ac6dfa141bb3e4b9: Status 404 returned error can't find the container with id c2a4ca07d1542011c8c2e102062cfe4d4c0891cdba102981ac6dfa141bb3e4b9 Dec 02 20:23:23 crc kubenswrapper[4796]: I1202 20:23:23.713893 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-wpmjh" event={"ID":"b8d99ae4-a394-4cd6-89c6-9fe019b477df","Type":"ContainerStarted","Data":"08da4386a103b5a0f7fd619e4648da83853ded17b8a913ef8546363d0046df87"} Dec 02 20:23:23 crc kubenswrapper[4796]: I1202 20:23:23.716652 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w" event={"ID":"0c2b967c-bf5c-41e0-8d3a-763881157417","Type":"ContainerStarted","Data":"d27e20ddcd336a1b70356a3bd9c73e324652ee89b161f0e21d26e4309e831e84"} Dec 02 20:23:23 crc kubenswrapper[4796]: I1202 20:23:23.716688 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w" event={"ID":"0c2b967c-bf5c-41e0-8d3a-763881157417","Type":"ContainerStarted","Data":"c2a4ca07d1542011c8c2e102062cfe4d4c0891cdba102981ac6dfa141bb3e4b9"} Dec 02 20:23:23 crc kubenswrapper[4796]: I1202 20:23:23.720125 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl" event={"ID":"73b60abc-2642-4af2-b0e5-263c48ca6f05","Type":"ContainerStarted","Data":"e91f7472bafc26b006e239daa22b9bb88a92d6942d3cf0d9e6e8b33029259987"} Dec 02 20:23:23 crc kubenswrapper[4796]: I1202 20:23:23.736174 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-wpmjh" 
podStartSLOduration=26.686584299 podStartE2EDuration="34.736112161s" podCreationTimestamp="2025-12-02 20:22:49 +0000 UTC" firstStartedPulling="2025-12-02 20:23:14.775166505 +0000 UTC m=+677.778542049" lastFinishedPulling="2025-12-02 20:23:22.824694377 +0000 UTC m=+685.828069911" observedRunningTime="2025-12-02 20:23:23.731222941 +0000 UTC m=+686.734598475" watchObservedRunningTime="2025-12-02 20:23:23.736112161 +0000 UTC m=+686.739487695" Dec 02 20:23:23 crc kubenswrapper[4796]: I1202 20:23:23.753381 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w" podStartSLOduration=34.753363375 podStartE2EDuration="34.753363375s" podCreationTimestamp="2025-12-02 20:22:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:23:23.751964111 +0000 UTC m=+686.755339645" watchObservedRunningTime="2025-12-02 20:23:23.753363375 +0000 UTC m=+686.756738909" Dec 02 20:23:23 crc kubenswrapper[4796]: I1202 20:23:23.771686 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl" podStartSLOduration=28.463379088 podStartE2EDuration="34.771667695s" podCreationTimestamp="2025-12-02 20:22:49 +0000 UTC" firstStartedPulling="2025-12-02 20:23:16.534411082 +0000 UTC m=+679.537786616" lastFinishedPulling="2025-12-02 20:23:22.842699689 +0000 UTC m=+685.846075223" observedRunningTime="2025-12-02 20:23:23.770022065 +0000 UTC m=+686.773397599" watchObservedRunningTime="2025-12-02 20:23:23.771667695 +0000 UTC m=+686.775043229" Dec 02 20:23:24 crc kubenswrapper[4796]: I1202 20:23:24.730830 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-ww4jg" event={"ID":"87139126-df78-43da-984c-e8f633bc52a9","Type":"ContainerStarted","Data":"b07678f741f08adeff02b48db6236db46081f3944ca2d0d6bc374e1e371c4723"} Dec 02 20:23:24 crc kubenswrapper[4796]: I1202 20:23:24.732419 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-ww4jg" Dec 02 20:23:24 crc kubenswrapper[4796]: I1202 20:23:24.768736 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-ww4jg" podStartSLOduration=33.414570065 podStartE2EDuration="35.768709641s" podCreationTimestamp="2025-12-02 20:22:49 +0000 UTC" firstStartedPulling="2025-12-02 20:23:22.150488029 +0000 UTC m=+685.153863563" lastFinishedPulling="2025-12-02 20:23:24.504627605 +0000 UTC m=+687.508003139" observedRunningTime="2025-12-02 20:23:24.761816222 +0000 UTC m=+687.765191766" watchObservedRunningTime="2025-12-02 20:23:24.768709641 +0000 UTC m=+687.772085175" Dec 02 20:23:29 crc kubenswrapper[4796]: I1202 20:23:29.765708 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-dd8nq" event={"ID":"e3d237b8-9c00-4ed9-b441-822ba51a7ed5","Type":"ContainerStarted","Data":"95e970e455004316bfc89dc068572fd81c7ed9f8c483b6923c4328f9ef53ebca"} Dec 02 20:23:29 crc kubenswrapper[4796]: I1202 20:23:29.766564 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-dd8nq" Dec 02 20:23:29 crc kubenswrapper[4796]: I1202 20:23:29.788064 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operators/observability-operator-d8bb48f5d-dd8nq" podStartSLOduration=29.256780362 podStartE2EDuration="40.788033613s" podCreationTimestamp="2025-12-02 20:22:49 +0000 UTC" firstStartedPulling="2025-12-02 20:23:17.507972872 +0000 UTC m=+680.511348416" lastFinishedPulling="2025-12-02 20:23:29.039226133 +0000 UTC m=+692.042601667" observedRunningTime="2025-12-02 20:23:29.783565163 +0000 UTC m=+692.786940697" watchObservedRunningTime="2025-12-02 20:23:29.788033613 +0000 UTC m=+692.791409147" Dec 02 20:23:29 crc kubenswrapper[4796]: I1202 20:23:29.840015 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-dd8nq" Dec 02 20:23:30 crc kubenswrapper[4796]: I1202 20:23:30.017301 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-ww4jg" Dec 02 20:23:39 crc kubenswrapper[4796]: I1202 20:23:39.486110 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2qsx7"] Dec 02 20:23:39 crc kubenswrapper[4796]: I1202 20:23:39.488504 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2qsx7" Dec 02 20:23:39 crc kubenswrapper[4796]: I1202 20:23:39.491199 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 02 20:23:39 crc kubenswrapper[4796]: I1202 20:23:39.504775 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2qsx7"] Dec 02 20:23:39 crc kubenswrapper[4796]: I1202 20:23:39.629280 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fdf39aad-b46e-4e58-afad-530ece05f9ad-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2qsx7\" (UID: \"fdf39aad-b46e-4e58-afad-530ece05f9ad\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2qsx7" Dec 02 20:23:39 crc kubenswrapper[4796]: I1202 20:23:39.629604 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fdf39aad-b46e-4e58-afad-530ece05f9ad-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2qsx7\" (UID: \"fdf39aad-b46e-4e58-afad-530ece05f9ad\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2qsx7" Dec 02 20:23:39 crc kubenswrapper[4796]: I1202 20:23:39.630007 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4dxx\" (UniqueName: \"kubernetes.io/projected/fdf39aad-b46e-4e58-afad-530ece05f9ad-kube-api-access-c4dxx\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2qsx7\" (UID: \"fdf39aad-b46e-4e58-afad-530ece05f9ad\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2qsx7" Dec 02 20:23:39 crc kubenswrapper[4796]: I1202 20:23:39.732122 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fdf39aad-b46e-4e58-afad-530ece05f9ad-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2qsx7\" (UID: \"fdf39aad-b46e-4e58-afad-530ece05f9ad\") " 
pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2qsx7" Dec 02 20:23:39 crc kubenswrapper[4796]: I1202 20:23:39.732219 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4dxx\" (UniqueName: \"kubernetes.io/projected/fdf39aad-b46e-4e58-afad-530ece05f9ad-kube-api-access-c4dxx\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2qsx7\" (UID: \"fdf39aad-b46e-4e58-afad-530ece05f9ad\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2qsx7" Dec 02 20:23:39 crc kubenswrapper[4796]: I1202 20:23:39.732271 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fdf39aad-b46e-4e58-afad-530ece05f9ad-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2qsx7\" (UID: \"fdf39aad-b46e-4e58-afad-530ece05f9ad\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2qsx7" Dec 02 20:23:39 crc kubenswrapper[4796]: I1202 20:23:39.732903 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fdf39aad-b46e-4e58-afad-530ece05f9ad-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2qsx7\" (UID: \"fdf39aad-b46e-4e58-afad-530ece05f9ad\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2qsx7" Dec 02 20:23:39 crc kubenswrapper[4796]: I1202 20:23:39.733382 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fdf39aad-b46e-4e58-afad-530ece05f9ad-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2qsx7\" (UID: \"fdf39aad-b46e-4e58-afad-530ece05f9ad\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2qsx7" Dec 02 20:23:39 crc kubenswrapper[4796]: I1202 20:23:39.757332 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4dxx\" (UniqueName: \"kubernetes.io/projected/fdf39aad-b46e-4e58-afad-530ece05f9ad-kube-api-access-c4dxx\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2qsx7\" (UID: \"fdf39aad-b46e-4e58-afad-530ece05f9ad\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2qsx7" Dec 02 20:23:39 crc kubenswrapper[4796]: I1202 20:23:39.811408 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2qsx7" Dec 02 20:23:40 crc kubenswrapper[4796]: I1202 20:23:40.057641 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2qsx7"] Dec 02 20:23:40 crc kubenswrapper[4796]: I1202 20:23:40.839732 4796 generic.go:334] "Generic (PLEG): container finished" podID="fdf39aad-b46e-4e58-afad-530ece05f9ad" containerID="3d6ef5f0e612d9b15b7bd15aedf8bfc670923187d8871145210ec7d6a7e269d8" exitCode=0 Dec 02 20:23:40 crc kubenswrapper[4796]: I1202 20:23:40.839993 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2qsx7" event={"ID":"fdf39aad-b46e-4e58-afad-530ece05f9ad","Type":"ContainerDied","Data":"3d6ef5f0e612d9b15b7bd15aedf8bfc670923187d8871145210ec7d6a7e269d8"} Dec 02 20:23:40 crc kubenswrapper[4796]: I1202 20:23:40.840341 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2qsx7" event={"ID":"fdf39aad-b46e-4e58-afad-530ece05f9ad","Type":"ContainerStarted","Data":"2df0746cdb288dc76e35303b76c719ac289f1c38a2ece011c5030aeed031df9a"} Dec 02 20:23:42 crc kubenswrapper[4796]: I1202 20:23:42.861380 4796 generic.go:334] "Generic (PLEG): container finished" podID="fdf39aad-b46e-4e58-afad-530ece05f9ad" containerID="a6eca34994c54380cf503343c41fc8e46838837435b2cf1780af195e3b8babfd" exitCode=0 Dec 02 20:23:42 crc kubenswrapper[4796]: I1202 20:23:42.861453 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2qsx7" event={"ID":"fdf39aad-b46e-4e58-afad-530ece05f9ad","Type":"ContainerDied","Data":"a6eca34994c54380cf503343c41fc8e46838837435b2cf1780af195e3b8babfd"} Dec 02 20:23:43 crc kubenswrapper[4796]: I1202 20:23:43.872295 4796 generic.go:334] "Generic (PLEG): container finished" podID="fdf39aad-b46e-4e58-afad-530ece05f9ad" containerID="689ee7b13fd136004ead71eca0fe8d192ab28e73962b87b232c6a5f5e2685837" exitCode=0 Dec 02 20:23:43 crc kubenswrapper[4796]: I1202 20:23:43.872378 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2qsx7" event={"ID":"fdf39aad-b46e-4e58-afad-530ece05f9ad","Type":"ContainerDied","Data":"689ee7b13fd136004ead71eca0fe8d192ab28e73962b87b232c6a5f5e2685837"} Dec 02 20:23:45 crc kubenswrapper[4796]: I1202 20:23:45.213805 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2qsx7" Dec 02 20:23:45 crc kubenswrapper[4796]: I1202 20:23:45.319447 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fdf39aad-b46e-4e58-afad-530ece05f9ad-bundle\") pod \"fdf39aad-b46e-4e58-afad-530ece05f9ad\" (UID: \"fdf39aad-b46e-4e58-afad-530ece05f9ad\") " Dec 02 20:23:45 crc kubenswrapper[4796]: I1202 20:23:45.319555 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fdf39aad-b46e-4e58-afad-530ece05f9ad-util\") pod \"fdf39aad-b46e-4e58-afad-530ece05f9ad\" (UID: \"fdf39aad-b46e-4e58-afad-530ece05f9ad\") " Dec 02 20:23:45 crc kubenswrapper[4796]: I1202 20:23:45.319598 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4dxx\" (UniqueName: \"kubernetes.io/projected/fdf39aad-b46e-4e58-afad-530ece05f9ad-kube-api-access-c4dxx\") pod \"fdf39aad-b46e-4e58-afad-530ece05f9ad\" (UID: \"fdf39aad-b46e-4e58-afad-530ece05f9ad\") " Dec 02 20:23:45 crc kubenswrapper[4796]: I1202 20:23:45.320667 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdf39aad-b46e-4e58-afad-530ece05f9ad-bundle" (OuterVolumeSpecName: "bundle") pod "fdf39aad-b46e-4e58-afad-530ece05f9ad" (UID: "fdf39aad-b46e-4e58-afad-530ece05f9ad"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:23:45 crc kubenswrapper[4796]: I1202 20:23:45.329090 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdf39aad-b46e-4e58-afad-530ece05f9ad-kube-api-access-c4dxx" (OuterVolumeSpecName: "kube-api-access-c4dxx") pod "fdf39aad-b46e-4e58-afad-530ece05f9ad" (UID: "fdf39aad-b46e-4e58-afad-530ece05f9ad"). InnerVolumeSpecName "kube-api-access-c4dxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:23:45 crc kubenswrapper[4796]: I1202 20:23:45.338082 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdf39aad-b46e-4e58-afad-530ece05f9ad-util" (OuterVolumeSpecName: "util") pod "fdf39aad-b46e-4e58-afad-530ece05f9ad" (UID: "fdf39aad-b46e-4e58-afad-530ece05f9ad"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:23:45 crc kubenswrapper[4796]: I1202 20:23:45.421532 4796 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fdf39aad-b46e-4e58-afad-530ece05f9ad-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:23:45 crc kubenswrapper[4796]: I1202 20:23:45.421601 4796 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fdf39aad-b46e-4e58-afad-530ece05f9ad-util\") on node \"crc\" DevicePath \"\"" Dec 02 20:23:45 crc kubenswrapper[4796]: I1202 20:23:45.421620 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4dxx\" (UniqueName: \"kubernetes.io/projected/fdf39aad-b46e-4e58-afad-530ece05f9ad-kube-api-access-c4dxx\") on node \"crc\" DevicePath \"\"" Dec 02 20:23:45 crc kubenswrapper[4796]: I1202 20:23:45.914124 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2qsx7" event={"ID":"fdf39aad-b46e-4e58-afad-530ece05f9ad","Type":"ContainerDied","Data":"2df0746cdb288dc76e35303b76c719ac289f1c38a2ece011c5030aeed031df9a"} Dec 02 20:23:45 crc kubenswrapper[4796]: I1202 20:23:45.914186 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2df0746cdb288dc76e35303b76c719ac289f1c38a2ece011c5030aeed031df9a" Dec 02 20:23:45 crc kubenswrapper[4796]: I1202 20:23:45.914219 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2qsx7" Dec 02 20:23:51 crc kubenswrapper[4796]: I1202 20:23:51.172310 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-clznf"] Dec 02 20:23:51 crc kubenswrapper[4796]: E1202 20:23:51.173198 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdf39aad-b46e-4e58-afad-530ece05f9ad" containerName="pull" Dec 02 20:23:51 crc kubenswrapper[4796]: I1202 20:23:51.173216 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdf39aad-b46e-4e58-afad-530ece05f9ad" containerName="pull" Dec 02 20:23:51 crc kubenswrapper[4796]: E1202 20:23:51.173237 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdf39aad-b46e-4e58-afad-530ece05f9ad" containerName="util" Dec 02 20:23:51 crc kubenswrapper[4796]: I1202 20:23:51.173245 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdf39aad-b46e-4e58-afad-530ece05f9ad" containerName="util" Dec 02 20:23:51 crc kubenswrapper[4796]: E1202 20:23:51.173281 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdf39aad-b46e-4e58-afad-530ece05f9ad" containerName="extract" Dec 02 20:23:51 crc kubenswrapper[4796]: I1202 20:23:51.173290 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdf39aad-b46e-4e58-afad-530ece05f9ad" containerName="extract" Dec 02 20:23:51 crc kubenswrapper[4796]: I1202 20:23:51.173412 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdf39aad-b46e-4e58-afad-530ece05f9ad" containerName="extract" Dec 02 20:23:51 crc kubenswrapper[4796]: I1202 20:23:51.173895 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-clznf" Dec 02 20:23:51 crc kubenswrapper[4796]: I1202 20:23:51.177086 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Dec 02 20:23:51 crc kubenswrapper[4796]: I1202 20:23:51.177648 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Dec 02 20:23:51 crc kubenswrapper[4796]: I1202 20:23:51.180078 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-zvdvk" Dec 02 20:23:51 crc kubenswrapper[4796]: I1202 20:23:51.194577 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-clznf"] Dec 02 20:23:51 crc kubenswrapper[4796]: I1202 20:23:51.305154 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6jrj\" (UniqueName: \"kubernetes.io/projected/faa51f2e-e5a4-4e9a-a808-2740d0511d04-kube-api-access-m6jrj\") pod \"nmstate-operator-5b5b58f5c8-clznf\" (UID: \"faa51f2e-e5a4-4e9a-a808-2740d0511d04\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-clznf" Dec 02 20:23:51 crc kubenswrapper[4796]: I1202 20:23:51.407091 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6jrj\" (UniqueName: \"kubernetes.io/projected/faa51f2e-e5a4-4e9a-a808-2740d0511d04-kube-api-access-m6jrj\") pod \"nmstate-operator-5b5b58f5c8-clznf\" (UID: \"faa51f2e-e5a4-4e9a-a808-2740d0511d04\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-clznf" Dec 02 20:23:51 crc kubenswrapper[4796]: I1202 20:23:51.431438 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6jrj\" (UniqueName: \"kubernetes.io/projected/faa51f2e-e5a4-4e9a-a808-2740d0511d04-kube-api-access-m6jrj\") pod \"nmstate-operator-5b5b58f5c8-clznf\" (UID: \"faa51f2e-e5a4-4e9a-a808-2740d0511d04\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-clznf" Dec 02 20:23:51 crc kubenswrapper[4796]: I1202 20:23:51.494813 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-clznf" Dec 02 20:23:51 crc kubenswrapper[4796]: I1202 20:23:51.787895 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-clznf"] Dec 02 20:23:51 crc kubenswrapper[4796]: I1202 20:23:51.967913 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-clznf" event={"ID":"faa51f2e-e5a4-4e9a-a808-2740d0511d04","Type":"ContainerStarted","Data":"f03e83c0fb20517936fd679da351cc4bfd25ac24c1945eecad436c8ad23e647e"} Dec 02 20:23:54 crc kubenswrapper[4796]: I1202 20:23:54.998495 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-clznf" event={"ID":"faa51f2e-e5a4-4e9a-a808-2740d0511d04","Type":"ContainerStarted","Data":"0c0c8d1c16f1fa8225cb2cfd5014c5ed96fcfd04bc187ba7f2df008b1ad9e1f9"} Dec 02 20:23:55 crc kubenswrapper[4796]: I1202 20:23:55.038148 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-clznf" podStartSLOduration=1.576705153 podStartE2EDuration="4.038079018s" podCreationTimestamp="2025-12-02 20:23:51 +0000 UTC" firstStartedPulling="2025-12-02 20:23:51.797971104 +0000 UTC m=+714.801346638" lastFinishedPulling="2025-12-02 20:23:54.259344959 +0000 UTC m=+717.262720503" observedRunningTime="2025-12-02 20:23:55.027225653 +0000 UTC m=+718.030601227" watchObservedRunningTime="2025-12-02 20:23:55.038079018 +0000 UTC m=+718.041454582" Dec 02 20:23:55 crc kubenswrapper[4796]: I1202 20:23:55.189193 4796 patch_prober.go:28] interesting pod/machine-config-daemon-wzhpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 20:23:55 crc kubenswrapper[4796]: I1202 20:23:55.189285 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.584422 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-bf7zw"] Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.586061 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-bf7zw" Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.588201 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-lpzhh" Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.604136 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xnjvp"] Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.605531 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xnjvp" Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.608340 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.608908 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-bf7zw"] Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.627828 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xnjvp"] Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.642992 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-7h74f"] Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.644109 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-7h74f" Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.743883 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zbtm2"] Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.744918 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zbtm2" Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.747552 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zbtm2"] Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.747921 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.747943 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-qppgt" Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.757721 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.758128 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d870477b-675b-429c-9d60-0b482dcf4996-dbus-socket\") pod \"nmstate-handler-7h74f\" (UID: \"d870477b-675b-429c-9d60-0b482dcf4996\") " pod="openshift-nmstate/nmstate-handler-7h74f" Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.758180 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d870477b-675b-429c-9d60-0b482dcf4996-nmstate-lock\") pod \"nmstate-handler-7h74f\" (UID: \"d870477b-675b-429c-9d60-0b482dcf4996\") " pod="openshift-nmstate/nmstate-handler-7h74f" Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.758217 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/02b629d9-8fcd-4caf-95d7-b0cb92a1a76f-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-xnjvp\" (UID: \"02b629d9-8fcd-4caf-95d7-b0cb92a1a76f\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xnjvp" Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.758306 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d870477b-675b-429c-9d60-0b482dcf4996-ovs-socket\") pod \"nmstate-handler-7h74f\" (UID: 
\"d870477b-675b-429c-9d60-0b482dcf4996\") " pod="openshift-nmstate/nmstate-handler-7h74f" Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.758340 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2xvk\" (UniqueName: \"kubernetes.io/projected/d870477b-675b-429c-9d60-0b482dcf4996-kube-api-access-p2xvk\") pod \"nmstate-handler-7h74f\" (UID: \"d870477b-675b-429c-9d60-0b482dcf4996\") " pod="openshift-nmstate/nmstate-handler-7h74f" Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.758373 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbt4d\" (UniqueName: \"kubernetes.io/projected/6a9c5307-34c1-408b-87d4-b9d005660199-kube-api-access-hbt4d\") pod \"nmstate-metrics-7f946cbc9-bf7zw\" (UID: \"6a9c5307-34c1-408b-87d4-b9d005660199\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-bf7zw" Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.758482 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt57w\" (UniqueName: \"kubernetes.io/projected/02b629d9-8fcd-4caf-95d7-b0cb92a1a76f-kube-api-access-dt57w\") pod \"nmstate-webhook-5f6d4c5ccb-xnjvp\" (UID: \"02b629d9-8fcd-4caf-95d7-b0cb92a1a76f\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xnjvp" Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.859979 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/02b629d9-8fcd-4caf-95d7-b0cb92a1a76f-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-xnjvp\" (UID: \"02b629d9-8fcd-4caf-95d7-b0cb92a1a76f\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xnjvp" Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.860035 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d870477b-675b-429c-9d60-0b482dcf4996-ovs-socket\") pod \"nmstate-handler-7h74f\" (UID: \"d870477b-675b-429c-9d60-0b482dcf4996\") " pod="openshift-nmstate/nmstate-handler-7h74f" Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.860083 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/27412017-4447-4daa-817e-6bb21c045489-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-zbtm2\" (UID: \"27412017-4447-4daa-817e-6bb21c045489\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zbtm2" Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.860105 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld669\" (UniqueName: \"kubernetes.io/projected/27412017-4447-4daa-817e-6bb21c045489-kube-api-access-ld669\") pod \"nmstate-console-plugin-7fbb5f6569-zbtm2\" (UID: \"27412017-4447-4daa-817e-6bb21c045489\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zbtm2" Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.860128 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2xvk\" (UniqueName: \"kubernetes.io/projected/d870477b-675b-429c-9d60-0b482dcf4996-kube-api-access-p2xvk\") pod \"nmstate-handler-7h74f\" (UID: \"d870477b-675b-429c-9d60-0b482dcf4996\") " pod="openshift-nmstate/nmstate-handler-7h74f" Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.860156 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/27412017-4447-4daa-817e-6bb21c045489-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-zbtm2\" (UID: \"27412017-4447-4daa-817e-6bb21c045489\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zbtm2" Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.860185 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbt4d\" (UniqueName: \"kubernetes.io/projected/6a9c5307-34c1-408b-87d4-b9d005660199-kube-api-access-hbt4d\") pod \"nmstate-metrics-7f946cbc9-bf7zw\" (UID: \"6a9c5307-34c1-408b-87d4-b9d005660199\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-bf7zw" Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.860246 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d870477b-675b-429c-9d60-0b482dcf4996-ovs-socket\") pod \"nmstate-handler-7h74f\" (UID: \"d870477b-675b-429c-9d60-0b482dcf4996\") " pod="openshift-nmstate/nmstate-handler-7h74f" Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.860311 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt57w\" (UniqueName: \"kubernetes.io/projected/02b629d9-8fcd-4caf-95d7-b0cb92a1a76f-kube-api-access-dt57w\") pod \"nmstate-webhook-5f6d4c5ccb-xnjvp\" (UID: \"02b629d9-8fcd-4caf-95d7-b0cb92a1a76f\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xnjvp" Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.860489 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d870477b-675b-429c-9d60-0b482dcf4996-dbus-socket\") pod \"nmstate-handler-7h74f\" (UID: \"d870477b-675b-429c-9d60-0b482dcf4996\") " pod="openshift-nmstate/nmstate-handler-7h74f" Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.860554 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d870477b-675b-429c-9d60-0b482dcf4996-nmstate-lock\") pod \"nmstate-handler-7h74f\" (UID: \"d870477b-675b-429c-9d60-0b482dcf4996\") " pod="openshift-nmstate/nmstate-handler-7h74f" Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.860673 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d870477b-675b-429c-9d60-0b482dcf4996-nmstate-lock\") pod \"nmstate-handler-7h74f\" (UID: \"d870477b-675b-429c-9d60-0b482dcf4996\") " pod="openshift-nmstate/nmstate-handler-7h74f" Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.860854 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d870477b-675b-429c-9d60-0b482dcf4996-dbus-socket\") pod \"nmstate-handler-7h74f\" (UID: \"d870477b-675b-429c-9d60-0b482dcf4996\") " pod="openshift-nmstate/nmstate-handler-7h74f" Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.873624 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/02b629d9-8fcd-4caf-95d7-b0cb92a1a76f-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-xnjvp\" (UID: \"02b629d9-8fcd-4caf-95d7-b0cb92a1a76f\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xnjvp" Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.877795 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-p2xvk\" (UniqueName: \"kubernetes.io/projected/d870477b-675b-429c-9d60-0b482dcf4996-kube-api-access-p2xvk\") pod \"nmstate-handler-7h74f\" (UID: \"d870477b-675b-429c-9d60-0b482dcf4996\") " pod="openshift-nmstate/nmstate-handler-7h74f" Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.897155 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt57w\" (UniqueName: \"kubernetes.io/projected/02b629d9-8fcd-4caf-95d7-b0cb92a1a76f-kube-api-access-dt57w\") pod \"nmstate-webhook-5f6d4c5ccb-xnjvp\" (UID: \"02b629d9-8fcd-4caf-95d7-b0cb92a1a76f\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xnjvp" Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.903686 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbt4d\" (UniqueName: \"kubernetes.io/projected/6a9c5307-34c1-408b-87d4-b9d005660199-kube-api-access-hbt4d\") pod \"nmstate-metrics-7f946cbc9-bf7zw\" (UID: \"6a9c5307-34c1-408b-87d4-b9d005660199\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-bf7zw" Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.904117 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-bf7zw" Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.926987 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xnjvp" Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.952775 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-57c46bb884-zwq4z"] Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.953566 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-57c46bb884-zwq4z" Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.961610 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/27412017-4447-4daa-817e-6bb21c045489-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-zbtm2\" (UID: \"27412017-4447-4daa-817e-6bb21c045489\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zbtm2" Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.961652 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld669\" (UniqueName: \"kubernetes.io/projected/27412017-4447-4daa-817e-6bb21c045489-kube-api-access-ld669\") pod \"nmstate-console-plugin-7fbb5f6569-zbtm2\" (UID: \"27412017-4447-4daa-817e-6bb21c045489\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zbtm2" Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.961679 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/27412017-4447-4daa-817e-6bb21c045489-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-zbtm2\" (UID: \"27412017-4447-4daa-817e-6bb21c045489\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zbtm2" Dec 02 20:24:00 crc kubenswrapper[4796]: E1202 20:24:00.961773 4796 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Dec 02 20:24:00 crc kubenswrapper[4796]: E1202 20:24:00.961829 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/27412017-4447-4daa-817e-6bb21c045489-plugin-serving-cert podName:27412017-4447-4daa-817e-6bb21c045489 nodeName:}" failed. 
No retries permitted until 2025-12-02 20:24:01.461811802 +0000 UTC m=+724.465187336 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/27412017-4447-4daa-817e-6bb21c045489-plugin-serving-cert") pod "nmstate-console-plugin-7fbb5f6569-zbtm2" (UID: "27412017-4447-4daa-817e-6bb21c045489") : secret "plugin-serving-cert" not found Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.962565 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/27412017-4447-4daa-817e-6bb21c045489-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-zbtm2\" (UID: \"27412017-4447-4daa-817e-6bb21c045489\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zbtm2" Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.967239 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-57c46bb884-zwq4z"] Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.975311 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-7h74f" Dec 02 20:24:00 crc kubenswrapper[4796]: I1202 20:24:00.986304 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld669\" (UniqueName: \"kubernetes.io/projected/27412017-4447-4daa-817e-6bb21c045489-kube-api-access-ld669\") pod \"nmstate-console-plugin-7fbb5f6569-zbtm2\" (UID: \"27412017-4447-4daa-817e-6bb21c045489\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zbtm2" Dec 02 20:24:01 crc kubenswrapper[4796]: I1202 20:24:01.046395 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-7h74f" event={"ID":"d870477b-675b-429c-9d60-0b482dcf4996","Type":"ContainerStarted","Data":"c45713b0eb097d26a8fa215acc6aba2d90f0bd56ce89e670efada5389745e845"} Dec 02 20:24:01 crc kubenswrapper[4796]: I1202 20:24:01.062957 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkddz\" (UniqueName: \"kubernetes.io/projected/bcaddfd6-aacd-4eae-a723-2837f69c9ecd-kube-api-access-kkddz\") pod \"console-57c46bb884-zwq4z\" (UID: \"bcaddfd6-aacd-4eae-a723-2837f69c9ecd\") " pod="openshift-console/console-57c46bb884-zwq4z" Dec 02 20:24:01 crc kubenswrapper[4796]: I1202 20:24:01.063011 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bcaddfd6-aacd-4eae-a723-2837f69c9ecd-console-oauth-config\") pod \"console-57c46bb884-zwq4z\" (UID: \"bcaddfd6-aacd-4eae-a723-2837f69c9ecd\") " pod="openshift-console/console-57c46bb884-zwq4z" Dec 02 20:24:01 crc kubenswrapper[4796]: I1202 20:24:01.063053 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bcaddfd6-aacd-4eae-a723-2837f69c9ecd-service-ca\") pod \"console-57c46bb884-zwq4z\" (UID: \"bcaddfd6-aacd-4eae-a723-2837f69c9ecd\") " pod="openshift-console/console-57c46bb884-zwq4z" Dec 02 20:24:01 crc kubenswrapper[4796]: I1202 20:24:01.063091 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bcaddfd6-aacd-4eae-a723-2837f69c9ecd-console-serving-cert\") pod \"console-57c46bb884-zwq4z\" (UID: \"bcaddfd6-aacd-4eae-a723-2837f69c9ecd\") " 
pod="openshift-console/console-57c46bb884-zwq4z" Dec 02 20:24:01 crc kubenswrapper[4796]: I1202 20:24:01.063114 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bcaddfd6-aacd-4eae-a723-2837f69c9ecd-oauth-serving-cert\") pod \"console-57c46bb884-zwq4z\" (UID: \"bcaddfd6-aacd-4eae-a723-2837f69c9ecd\") " pod="openshift-console/console-57c46bb884-zwq4z" Dec 02 20:24:01 crc kubenswrapper[4796]: I1202 20:24:01.063131 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcaddfd6-aacd-4eae-a723-2837f69c9ecd-trusted-ca-bundle\") pod \"console-57c46bb884-zwq4z\" (UID: \"bcaddfd6-aacd-4eae-a723-2837f69c9ecd\") " pod="openshift-console/console-57c46bb884-zwq4z" Dec 02 20:24:01 crc kubenswrapper[4796]: I1202 20:24:01.063150 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bcaddfd6-aacd-4eae-a723-2837f69c9ecd-console-config\") pod \"console-57c46bb884-zwq4z\" (UID: \"bcaddfd6-aacd-4eae-a723-2837f69c9ecd\") " pod="openshift-console/console-57c46bb884-zwq4z" Dec 02 20:24:01 crc kubenswrapper[4796]: I1202 20:24:01.164234 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bcaddfd6-aacd-4eae-a723-2837f69c9ecd-console-serving-cert\") pod \"console-57c46bb884-zwq4z\" (UID: \"bcaddfd6-aacd-4eae-a723-2837f69c9ecd\") " pod="openshift-console/console-57c46bb884-zwq4z" Dec 02 20:24:01 crc kubenswrapper[4796]: I1202 20:24:01.164363 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bcaddfd6-aacd-4eae-a723-2837f69c9ecd-oauth-serving-cert\") pod \"console-57c46bb884-zwq4z\" (UID: \"bcaddfd6-aacd-4eae-a723-2837f69c9ecd\") " pod="openshift-console/console-57c46bb884-zwq4z" Dec 02 20:24:01 crc kubenswrapper[4796]: I1202 20:24:01.164393 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcaddfd6-aacd-4eae-a723-2837f69c9ecd-trusted-ca-bundle\") pod \"console-57c46bb884-zwq4z\" (UID: \"bcaddfd6-aacd-4eae-a723-2837f69c9ecd\") " pod="openshift-console/console-57c46bb884-zwq4z" Dec 02 20:24:01 crc kubenswrapper[4796]: I1202 20:24:01.164418 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bcaddfd6-aacd-4eae-a723-2837f69c9ecd-console-config\") pod \"console-57c46bb884-zwq4z\" (UID: \"bcaddfd6-aacd-4eae-a723-2837f69c9ecd\") " pod="openshift-console/console-57c46bb884-zwq4z" Dec 02 20:24:01 crc kubenswrapper[4796]: I1202 20:24:01.164482 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkddz\" (UniqueName: \"kubernetes.io/projected/bcaddfd6-aacd-4eae-a723-2837f69c9ecd-kube-api-access-kkddz\") pod \"console-57c46bb884-zwq4z\" (UID: \"bcaddfd6-aacd-4eae-a723-2837f69c9ecd\") " pod="openshift-console/console-57c46bb884-zwq4z" Dec 02 20:24:01 crc kubenswrapper[4796]: I1202 20:24:01.164520 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bcaddfd6-aacd-4eae-a723-2837f69c9ecd-console-oauth-config\") pod 
\"console-57c46bb884-zwq4z\" (UID: \"bcaddfd6-aacd-4eae-a723-2837f69c9ecd\") " pod="openshift-console/console-57c46bb884-zwq4z" Dec 02 20:24:01 crc kubenswrapper[4796]: I1202 20:24:01.164569 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bcaddfd6-aacd-4eae-a723-2837f69c9ecd-service-ca\") pod \"console-57c46bb884-zwq4z\" (UID: \"bcaddfd6-aacd-4eae-a723-2837f69c9ecd\") " pod="openshift-console/console-57c46bb884-zwq4z" Dec 02 20:24:01 crc kubenswrapper[4796]: I1202 20:24:01.167023 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bcaddfd6-aacd-4eae-a723-2837f69c9ecd-oauth-serving-cert\") pod \"console-57c46bb884-zwq4z\" (UID: \"bcaddfd6-aacd-4eae-a723-2837f69c9ecd\") " pod="openshift-console/console-57c46bb884-zwq4z" Dec 02 20:24:01 crc kubenswrapper[4796]: I1202 20:24:01.168388 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcaddfd6-aacd-4eae-a723-2837f69c9ecd-trusted-ca-bundle\") pod \"console-57c46bb884-zwq4z\" (UID: \"bcaddfd6-aacd-4eae-a723-2837f69c9ecd\") " pod="openshift-console/console-57c46bb884-zwq4z" Dec 02 20:24:01 crc kubenswrapper[4796]: I1202 20:24:01.173014 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bcaddfd6-aacd-4eae-a723-2837f69c9ecd-service-ca\") pod \"console-57c46bb884-zwq4z\" (UID: \"bcaddfd6-aacd-4eae-a723-2837f69c9ecd\") " pod="openshift-console/console-57c46bb884-zwq4z" Dec 02 20:24:01 crc kubenswrapper[4796]: I1202 20:24:01.174658 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bcaddfd6-aacd-4eae-a723-2837f69c9ecd-console-oauth-config\") pod \"console-57c46bb884-zwq4z\" (UID: \"bcaddfd6-aacd-4eae-a723-2837f69c9ecd\") " pod="openshift-console/console-57c46bb884-zwq4z" Dec 02 20:24:01 crc kubenswrapper[4796]: I1202 20:24:01.174877 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bcaddfd6-aacd-4eae-a723-2837f69c9ecd-console-serving-cert\") pod \"console-57c46bb884-zwq4z\" (UID: \"bcaddfd6-aacd-4eae-a723-2837f69c9ecd\") " pod="openshift-console/console-57c46bb884-zwq4z" Dec 02 20:24:01 crc kubenswrapper[4796]: I1202 20:24:01.182772 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bcaddfd6-aacd-4eae-a723-2837f69c9ecd-console-config\") pod \"console-57c46bb884-zwq4z\" (UID: \"bcaddfd6-aacd-4eae-a723-2837f69c9ecd\") " pod="openshift-console/console-57c46bb884-zwq4z" Dec 02 20:24:01 crc kubenswrapper[4796]: I1202 20:24:01.190125 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkddz\" (UniqueName: \"kubernetes.io/projected/bcaddfd6-aacd-4eae-a723-2837f69c9ecd-kube-api-access-kkddz\") pod \"console-57c46bb884-zwq4z\" (UID: \"bcaddfd6-aacd-4eae-a723-2837f69c9ecd\") " pod="openshift-console/console-57c46bb884-zwq4z" Dec 02 20:24:01 crc kubenswrapper[4796]: I1202 20:24:01.195587 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-bf7zw"] Dec 02 20:24:01 crc kubenswrapper[4796]: W1202 20:24:01.197003 4796 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a9c5307_34c1_408b_87d4_b9d005660199.slice/crio-3ad6cd30cc4ca7a91fe278ebd9c3da0713bc7a89633d79893f6e9cc2b131db15 WatchSource:0}: Error finding container 3ad6cd30cc4ca7a91fe278ebd9c3da0713bc7a89633d79893f6e9cc2b131db15: Status 404 returned error can't find the container with id 3ad6cd30cc4ca7a91fe278ebd9c3da0713bc7a89633d79893f6e9cc2b131db15 Dec 02 20:24:01 crc kubenswrapper[4796]: I1202 20:24:01.293648 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-57c46bb884-zwq4z" Dec 02 20:24:01 crc kubenswrapper[4796]: I1202 20:24:01.469937 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/27412017-4447-4daa-817e-6bb21c045489-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-zbtm2\" (UID: \"27412017-4447-4daa-817e-6bb21c045489\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zbtm2" Dec 02 20:24:01 crc kubenswrapper[4796]: I1202 20:24:01.475499 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/27412017-4447-4daa-817e-6bb21c045489-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-zbtm2\" (UID: \"27412017-4447-4daa-817e-6bb21c045489\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zbtm2" Dec 02 20:24:01 crc kubenswrapper[4796]: I1202 20:24:01.476662 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xnjvp"] Dec 02 20:24:01 crc kubenswrapper[4796]: I1202 20:24:01.547656 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-57c46bb884-zwq4z"] Dec 02 20:24:01 crc kubenswrapper[4796]: I1202 20:24:01.672747 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zbtm2" Dec 02 20:24:01 crc kubenswrapper[4796]: I1202 20:24:01.965775 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zbtm2"] Dec 02 20:24:01 crc kubenswrapper[4796]: W1202 20:24:01.980490 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27412017_4447_4daa_817e_6bb21c045489.slice/crio-57d1142cb1174b66fe68df89837b559b833deda3785c072742e6470942ea127d WatchSource:0}: Error finding container 57d1142cb1174b66fe68df89837b559b833deda3785c072742e6470942ea127d: Status 404 returned error can't find the container with id 57d1142cb1174b66fe68df89837b559b833deda3785c072742e6470942ea127d Dec 02 20:24:02 crc kubenswrapper[4796]: I1202 20:24:02.057017 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-bf7zw" event={"ID":"6a9c5307-34c1-408b-87d4-b9d005660199","Type":"ContainerStarted","Data":"3ad6cd30cc4ca7a91fe278ebd9c3da0713bc7a89633d79893f6e9cc2b131db15"} Dec 02 20:24:02 crc kubenswrapper[4796]: I1202 20:24:02.058585 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xnjvp" event={"ID":"02b629d9-8fcd-4caf-95d7-b0cb92a1a76f","Type":"ContainerStarted","Data":"ee12de4fc50f5aebffccbdc3c9a0b380e4282137622622a354cdd54106aeb5d4"} Dec 02 20:24:02 crc kubenswrapper[4796]: I1202 20:24:02.059695 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zbtm2" event={"ID":"27412017-4447-4daa-817e-6bb21c045489","Type":"ContainerStarted","Data":"57d1142cb1174b66fe68df89837b559b833deda3785c072742e6470942ea127d"} Dec 02 20:24:02 crc kubenswrapper[4796]: I1202 20:24:02.060824 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57c46bb884-zwq4z" event={"ID":"bcaddfd6-aacd-4eae-a723-2837f69c9ecd","Type":"ContainerStarted","Data":"0c4e555e4383ec3729526717449c54e6b54aee1c216f0b780e017539c0a55124"} Dec 02 20:24:03 crc kubenswrapper[4796]: I1202 20:24:03.071640 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57c46bb884-zwq4z" event={"ID":"bcaddfd6-aacd-4eae-a723-2837f69c9ecd","Type":"ContainerStarted","Data":"f9fc04b95847febce56605560a410b199af24c474279dcc444da0ab1b5ca491b"} Dec 02 20:24:03 crc kubenswrapper[4796]: I1202 20:24:03.096963 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-57c46bb884-zwq4z" podStartSLOduration=3.096938161 podStartE2EDuration="3.096938161s" podCreationTimestamp="2025-12-02 20:24:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:24:03.090635938 +0000 UTC m=+726.094011492" watchObservedRunningTime="2025-12-02 20:24:03.096938161 +0000 UTC m=+726.100313695" Dec 02 20:24:04 crc kubenswrapper[4796]: I1202 20:24:04.080582 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xnjvp" event={"ID":"02b629d9-8fcd-4caf-95d7-b0cb92a1a76f","Type":"ContainerStarted","Data":"9121e94e52da561a4ad61ae54122d8f1c6a2fd9b7c2b7df46c8cf52088a5027f"} Dec 02 20:24:04 crc kubenswrapper[4796]: I1202 20:24:04.081185 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xnjvp" Dec 02 20:24:04 crc 
kubenswrapper[4796]: I1202 20:24:04.083914 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-7h74f" event={"ID":"d870477b-675b-429c-9d60-0b482dcf4996","Type":"ContainerStarted","Data":"54c3e50fa8b492776775076ba0317d270c858ae13075976bdb1c3af2fc3d9a6a"} Dec 02 20:24:04 crc kubenswrapper[4796]: I1202 20:24:04.084474 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-7h74f" Dec 02 20:24:04 crc kubenswrapper[4796]: I1202 20:24:04.087050 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-bf7zw" event={"ID":"6a9c5307-34c1-408b-87d4-b9d005660199","Type":"ContainerStarted","Data":"78571a66a07915bf73b0cdc75db7e167c5ce8521139369d85dfa76bdfa88b87a"} Dec 02 20:24:04 crc kubenswrapper[4796]: I1202 20:24:04.103090 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xnjvp" podStartSLOduration=1.822908835 podStartE2EDuration="4.10306067s" podCreationTimestamp="2025-12-02 20:24:00 +0000 UTC" firstStartedPulling="2025-12-02 20:24:01.488426147 +0000 UTC m=+724.491801691" lastFinishedPulling="2025-12-02 20:24:03.768577952 +0000 UTC m=+726.771953526" observedRunningTime="2025-12-02 20:24:04.09896995 +0000 UTC m=+727.102345504" watchObservedRunningTime="2025-12-02 20:24:04.10306067 +0000 UTC m=+727.106436204" Dec 02 20:24:04 crc kubenswrapper[4796]: I1202 20:24:04.123001 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-7h74f" podStartSLOduration=1.419788109 podStartE2EDuration="4.122979916s" podCreationTimestamp="2025-12-02 20:24:00 +0000 UTC" firstStartedPulling="2025-12-02 20:24:01.035080514 +0000 UTC m=+724.038456048" lastFinishedPulling="2025-12-02 20:24:03.738272321 +0000 UTC m=+726.741647855" observedRunningTime="2025-12-02 20:24:04.120540666 +0000 UTC m=+727.123916210" watchObservedRunningTime="2025-12-02 20:24:04.122979916 +0000 UTC m=+727.126355450" Dec 02 20:24:06 crc kubenswrapper[4796]: I1202 20:24:06.100999 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zbtm2" event={"ID":"27412017-4447-4daa-817e-6bb21c045489","Type":"ContainerStarted","Data":"e7868e90fc8526e2144e50d684f574e4879165b2efe9554eab6efdb610022a32"} Dec 02 20:24:06 crc kubenswrapper[4796]: I1202 20:24:06.128019 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zbtm2" podStartSLOduration=3.116066519 podStartE2EDuration="6.127988585s" podCreationTimestamp="2025-12-02 20:24:00 +0000 UTC" firstStartedPulling="2025-12-02 20:24:01.984793152 +0000 UTC m=+724.988168706" lastFinishedPulling="2025-12-02 20:24:04.996715228 +0000 UTC m=+728.000090772" observedRunningTime="2025-12-02 20:24:06.124787297 +0000 UTC m=+729.128162871" watchObservedRunningTime="2025-12-02 20:24:06.127988585 +0000 UTC m=+729.131364149" Dec 02 20:24:07 crc kubenswrapper[4796]: I1202 20:24:07.112506 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-bf7zw" event={"ID":"6a9c5307-34c1-408b-87d4-b9d005660199","Type":"ContainerStarted","Data":"52dc33e3cd16cfe093490a3436ccb64a8bae21b752760fa3d0c74544f866a438"} Dec 02 20:24:07 crc kubenswrapper[4796]: I1202 20:24:07.150914 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-bf7zw" 
podStartSLOduration=2.050048978 podStartE2EDuration="7.150881663s" podCreationTimestamp="2025-12-02 20:24:00 +0000 UTC" firstStartedPulling="2025-12-02 20:24:01.200247152 +0000 UTC m=+724.203622686" lastFinishedPulling="2025-12-02 20:24:06.301079837 +0000 UTC m=+729.304455371" observedRunningTime="2025-12-02 20:24:07.139009692 +0000 UTC m=+730.142385256" watchObservedRunningTime="2025-12-02 20:24:07.150881663 +0000 UTC m=+730.154257227" Dec 02 20:24:11 crc kubenswrapper[4796]: I1202 20:24:11.029086 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-7h74f" Dec 02 20:24:11 crc kubenswrapper[4796]: I1202 20:24:11.294764 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-57c46bb884-zwq4z" Dec 02 20:24:11 crc kubenswrapper[4796]: I1202 20:24:11.295071 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-57c46bb884-zwq4z" Dec 02 20:24:11 crc kubenswrapper[4796]: I1202 20:24:11.303976 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-57c46bb884-zwq4z" Dec 02 20:24:12 crc kubenswrapper[4796]: I1202 20:24:12.158719 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-57c46bb884-zwq4z" Dec 02 20:24:12 crc kubenswrapper[4796]: I1202 20:24:12.244824 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-js54s"] Dec 02 20:24:20 crc kubenswrapper[4796]: I1202 20:24:20.936065 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xnjvp" Dec 02 20:24:25 crc kubenswrapper[4796]: I1202 20:24:25.189667 4796 patch_prober.go:28] interesting pod/machine-config-daemon-wzhpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 20:24:25 crc kubenswrapper[4796]: I1202 20:24:25.190221 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 20:24:37 crc kubenswrapper[4796]: I1202 20:24:37.302243 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-js54s" podUID="8e53767d-5052-4220-9645-b8d6d433a7df" containerName="console" containerID="cri-o://f22af3f9e8d83b6931c68215c883662d6e7a1d423a1c80e767979c1bec3b735d" gracePeriod=15 Dec 02 20:24:37 crc kubenswrapper[4796]: I1202 20:24:37.463823 4796 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 02 20:24:38 crc kubenswrapper[4796]: I1202 20:24:38.186729 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-js54s_8e53767d-5052-4220-9645-b8d6d433a7df/console/0.log" Dec 02 20:24:38 crc kubenswrapper[4796]: I1202 20:24:38.187106 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-js54s" Dec 02 20:24:38 crc kubenswrapper[4796]: I1202 20:24:38.331601 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8e53767d-5052-4220-9645-b8d6d433a7df-console-config\") pod \"8e53767d-5052-4220-9645-b8d6d433a7df\" (UID: \"8e53767d-5052-4220-9645-b8d6d433a7df\") " Dec 02 20:24:38 crc kubenswrapper[4796]: I1202 20:24:38.331678 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8e53767d-5052-4220-9645-b8d6d433a7df-service-ca\") pod \"8e53767d-5052-4220-9645-b8d6d433a7df\" (UID: \"8e53767d-5052-4220-9645-b8d6d433a7df\") " Dec 02 20:24:38 crc kubenswrapper[4796]: I1202 20:24:38.331923 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8e53767d-5052-4220-9645-b8d6d433a7df-oauth-serving-cert\") pod \"8e53767d-5052-4220-9645-b8d6d433a7df\" (UID: \"8e53767d-5052-4220-9645-b8d6d433a7df\") " Dec 02 20:24:38 crc kubenswrapper[4796]: I1202 20:24:38.331991 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8e53767d-5052-4220-9645-b8d6d433a7df-console-oauth-config\") pod \"8e53767d-5052-4220-9645-b8d6d433a7df\" (UID: \"8e53767d-5052-4220-9645-b8d6d433a7df\") " Dec 02 20:24:38 crc kubenswrapper[4796]: I1202 20:24:38.332009 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e53767d-5052-4220-9645-b8d6d433a7df-trusted-ca-bundle\") pod \"8e53767d-5052-4220-9645-b8d6d433a7df\" (UID: \"8e53767d-5052-4220-9645-b8d6d433a7df\") " Dec 02 20:24:38 crc kubenswrapper[4796]: I1202 20:24:38.332077 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxh5t\" (UniqueName: \"kubernetes.io/projected/8e53767d-5052-4220-9645-b8d6d433a7df-kube-api-access-fxh5t\") pod \"8e53767d-5052-4220-9645-b8d6d433a7df\" (UID: \"8e53767d-5052-4220-9645-b8d6d433a7df\") " Dec 02 20:24:38 crc kubenswrapper[4796]: I1202 20:24:38.332100 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8e53767d-5052-4220-9645-b8d6d433a7df-console-serving-cert\") pod \"8e53767d-5052-4220-9645-b8d6d433a7df\" (UID: \"8e53767d-5052-4220-9645-b8d6d433a7df\") " Dec 02 20:24:38 crc kubenswrapper[4796]: I1202 20:24:38.332424 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e53767d-5052-4220-9645-b8d6d433a7df-console-config" (OuterVolumeSpecName: "console-config") pod "8e53767d-5052-4220-9645-b8d6d433a7df" (UID: "8e53767d-5052-4220-9645-b8d6d433a7df"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:24:38 crc kubenswrapper[4796]: I1202 20:24:38.332448 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e53767d-5052-4220-9645-b8d6d433a7df-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "8e53767d-5052-4220-9645-b8d6d433a7df" (UID: "8e53767d-5052-4220-9645-b8d6d433a7df"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:24:38 crc kubenswrapper[4796]: I1202 20:24:38.332601 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e53767d-5052-4220-9645-b8d6d433a7df-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "8e53767d-5052-4220-9645-b8d6d433a7df" (UID: "8e53767d-5052-4220-9645-b8d6d433a7df"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:24:38 crc kubenswrapper[4796]: I1202 20:24:38.332750 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e53767d-5052-4220-9645-b8d6d433a7df-service-ca" (OuterVolumeSpecName: "service-ca") pod "8e53767d-5052-4220-9645-b8d6d433a7df" (UID: "8e53767d-5052-4220-9645-b8d6d433a7df"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:24:38 crc kubenswrapper[4796]: I1202 20:24:38.333024 4796 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8e53767d-5052-4220-9645-b8d6d433a7df-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 20:24:38 crc kubenswrapper[4796]: I1202 20:24:38.333042 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e53767d-5052-4220-9645-b8d6d433a7df-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:24:38 crc kubenswrapper[4796]: I1202 20:24:38.333050 4796 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8e53767d-5052-4220-9645-b8d6d433a7df-console-config\") on node \"crc\" DevicePath \"\"" Dec 02 20:24:38 crc kubenswrapper[4796]: I1202 20:24:38.333059 4796 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8e53767d-5052-4220-9645-b8d6d433a7df-service-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:24:38 crc kubenswrapper[4796]: I1202 20:24:38.337682 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e53767d-5052-4220-9645-b8d6d433a7df-kube-api-access-fxh5t" (OuterVolumeSpecName: "kube-api-access-fxh5t") pod "8e53767d-5052-4220-9645-b8d6d433a7df" (UID: "8e53767d-5052-4220-9645-b8d6d433a7df"). InnerVolumeSpecName "kube-api-access-fxh5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:24:38 crc kubenswrapper[4796]: I1202 20:24:38.337726 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e53767d-5052-4220-9645-b8d6d433a7df-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "8e53767d-5052-4220-9645-b8d6d433a7df" (UID: "8e53767d-5052-4220-9645-b8d6d433a7df"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:24:38 crc kubenswrapper[4796]: I1202 20:24:38.338004 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e53767d-5052-4220-9645-b8d6d433a7df-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "8e53767d-5052-4220-9645-b8d6d433a7df" (UID: "8e53767d-5052-4220-9645-b8d6d433a7df"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:24:38 crc kubenswrapper[4796]: I1202 20:24:38.416658 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-js54s_8e53767d-5052-4220-9645-b8d6d433a7df/console/0.log" Dec 02 20:24:38 crc kubenswrapper[4796]: I1202 20:24:38.416707 4796 generic.go:334] "Generic (PLEG): container finished" podID="8e53767d-5052-4220-9645-b8d6d433a7df" containerID="f22af3f9e8d83b6931c68215c883662d6e7a1d423a1c80e767979c1bec3b735d" exitCode=2 Dec 02 20:24:38 crc kubenswrapper[4796]: I1202 20:24:38.416734 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-js54s" event={"ID":"8e53767d-5052-4220-9645-b8d6d433a7df","Type":"ContainerDied","Data":"f22af3f9e8d83b6931c68215c883662d6e7a1d423a1c80e767979c1bec3b735d"} Dec 02 20:24:38 crc kubenswrapper[4796]: I1202 20:24:38.416761 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-js54s" event={"ID":"8e53767d-5052-4220-9645-b8d6d433a7df","Type":"ContainerDied","Data":"6f6c72f9c32b8e4fad7af62772284c4b9b4a993f673735275a3efa30ed1553f7"} Dec 02 20:24:38 crc kubenswrapper[4796]: I1202 20:24:38.416777 4796 scope.go:117] "RemoveContainer" containerID="f22af3f9e8d83b6931c68215c883662d6e7a1d423a1c80e767979c1bec3b735d" Dec 02 20:24:38 crc kubenswrapper[4796]: I1202 20:24:38.416817 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-js54s" Dec 02 20:24:38 crc kubenswrapper[4796]: I1202 20:24:38.432481 4796 scope.go:117] "RemoveContainer" containerID="f22af3f9e8d83b6931c68215c883662d6e7a1d423a1c80e767979c1bec3b735d" Dec 02 20:24:38 crc kubenswrapper[4796]: E1202 20:24:38.432829 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f22af3f9e8d83b6931c68215c883662d6e7a1d423a1c80e767979c1bec3b735d\": container with ID starting with f22af3f9e8d83b6931c68215c883662d6e7a1d423a1c80e767979c1bec3b735d not found: ID does not exist" containerID="f22af3f9e8d83b6931c68215c883662d6e7a1d423a1c80e767979c1bec3b735d" Dec 02 20:24:38 crc kubenswrapper[4796]: I1202 20:24:38.432859 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f22af3f9e8d83b6931c68215c883662d6e7a1d423a1c80e767979c1bec3b735d"} err="failed to get container status \"f22af3f9e8d83b6931c68215c883662d6e7a1d423a1c80e767979c1bec3b735d\": rpc error: code = NotFound desc = could not find container \"f22af3f9e8d83b6931c68215c883662d6e7a1d423a1c80e767979c1bec3b735d\": container with ID starting with f22af3f9e8d83b6931c68215c883662d6e7a1d423a1c80e767979c1bec3b735d not found: ID does not exist" Dec 02 20:24:38 crc kubenswrapper[4796]: I1202 20:24:38.433897 4796 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8e53767d-5052-4220-9645-b8d6d433a7df-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 02 20:24:38 crc kubenswrapper[4796]: I1202 20:24:38.433917 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxh5t\" (UniqueName: \"kubernetes.io/projected/8e53767d-5052-4220-9645-b8d6d433a7df-kube-api-access-fxh5t\") on node \"crc\" DevicePath \"\"" Dec 02 20:24:38 crc kubenswrapper[4796]: I1202 20:24:38.433928 4796 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8e53767d-5052-4220-9645-b8d6d433a7df-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 20:24:38 crc kubenswrapper[4796]: I1202 20:24:38.449505 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-js54s"] Dec 02 20:24:38 crc kubenswrapper[4796]: I1202 20:24:38.454846 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-js54s"] Dec 02 20:24:39 crc kubenswrapper[4796]: I1202 20:24:39.107697 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83jtdtp"] Dec 02 20:24:39 crc kubenswrapper[4796]: E1202 20:24:39.107986 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e53767d-5052-4220-9645-b8d6d433a7df" containerName="console" Dec 02 20:24:39 crc kubenswrapper[4796]: I1202 20:24:39.108002 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e53767d-5052-4220-9645-b8d6d433a7df" containerName="console" Dec 02 20:24:39 crc kubenswrapper[4796]: I1202 20:24:39.108141 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e53767d-5052-4220-9645-b8d6d433a7df" containerName="console" Dec 02 20:24:39 crc kubenswrapper[4796]: I1202 20:24:39.109125 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83jtdtp" Dec 02 20:24:39 crc kubenswrapper[4796]: I1202 20:24:39.112051 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 02 20:24:39 crc kubenswrapper[4796]: I1202 20:24:39.120566 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83jtdtp"] Dec 02 20:24:39 crc kubenswrapper[4796]: I1202 20:24:39.142007 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9f40dddc-3877-418d-8f01-9c1ac187cccf-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83jtdtp\" (UID: \"9f40dddc-3877-418d-8f01-9c1ac187cccf\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83jtdtp" Dec 02 20:24:39 crc kubenswrapper[4796]: I1202 20:24:39.142143 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xxpw\" (UniqueName: \"kubernetes.io/projected/9f40dddc-3877-418d-8f01-9c1ac187cccf-kube-api-access-2xxpw\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83jtdtp\" (UID: \"9f40dddc-3877-418d-8f01-9c1ac187cccf\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83jtdtp" Dec 02 20:24:39 crc kubenswrapper[4796]: I1202 20:24:39.142279 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9f40dddc-3877-418d-8f01-9c1ac187cccf-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83jtdtp\" (UID: \"9f40dddc-3877-418d-8f01-9c1ac187cccf\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83jtdtp" Dec 02 20:24:39 crc kubenswrapper[4796]: I1202 20:24:39.243478 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xxpw\" (UniqueName: 
\"kubernetes.io/projected/9f40dddc-3877-418d-8f01-9c1ac187cccf-kube-api-access-2xxpw\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83jtdtp\" (UID: \"9f40dddc-3877-418d-8f01-9c1ac187cccf\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83jtdtp" Dec 02 20:24:39 crc kubenswrapper[4796]: I1202 20:24:39.243696 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9f40dddc-3877-418d-8f01-9c1ac187cccf-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83jtdtp\" (UID: \"9f40dddc-3877-418d-8f01-9c1ac187cccf\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83jtdtp" Dec 02 20:24:39 crc kubenswrapper[4796]: I1202 20:24:39.243775 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9f40dddc-3877-418d-8f01-9c1ac187cccf-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83jtdtp\" (UID: \"9f40dddc-3877-418d-8f01-9c1ac187cccf\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83jtdtp" Dec 02 20:24:39 crc kubenswrapper[4796]: I1202 20:24:39.244533 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9f40dddc-3877-418d-8f01-9c1ac187cccf-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83jtdtp\" (UID: \"9f40dddc-3877-418d-8f01-9c1ac187cccf\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83jtdtp" Dec 02 20:24:39 crc kubenswrapper[4796]: I1202 20:24:39.244890 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9f40dddc-3877-418d-8f01-9c1ac187cccf-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83jtdtp\" (UID: \"9f40dddc-3877-418d-8f01-9c1ac187cccf\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83jtdtp" Dec 02 20:24:39 crc kubenswrapper[4796]: I1202 20:24:39.262494 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xxpw\" (UniqueName: \"kubernetes.io/projected/9f40dddc-3877-418d-8f01-9c1ac187cccf-kube-api-access-2xxpw\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83jtdtp\" (UID: \"9f40dddc-3877-418d-8f01-9c1ac187cccf\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83jtdtp" Dec 02 20:24:39 crc kubenswrapper[4796]: I1202 20:24:39.289534 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e53767d-5052-4220-9645-b8d6d433a7df" path="/var/lib/kubelet/pods/8e53767d-5052-4220-9645-b8d6d433a7df/volumes" Dec 02 20:24:39 crc kubenswrapper[4796]: I1202 20:24:39.432962 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83jtdtp" Dec 02 20:24:39 crc kubenswrapper[4796]: I1202 20:24:39.718622 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83jtdtp"] Dec 02 20:24:40 crc kubenswrapper[4796]: I1202 20:24:40.434390 4796 generic.go:334] "Generic (PLEG): container finished" podID="9f40dddc-3877-418d-8f01-9c1ac187cccf" containerID="64402d6fdebb145581a3939a16295c75be856a3d0667dac5c5db58d88ce8bdf7" exitCode=0 Dec 02 20:24:40 crc kubenswrapper[4796]: I1202 20:24:40.434468 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83jtdtp" event={"ID":"9f40dddc-3877-418d-8f01-9c1ac187cccf","Type":"ContainerDied","Data":"64402d6fdebb145581a3939a16295c75be856a3d0667dac5c5db58d88ce8bdf7"} Dec 02 20:24:40 crc kubenswrapper[4796]: I1202 20:24:40.434852 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83jtdtp" event={"ID":"9f40dddc-3877-418d-8f01-9c1ac187cccf","Type":"ContainerStarted","Data":"6298ad2da16ebe79e6fbe4b9176e740db1d126b341c002ed853ba9d9997fe042"} Dec 02 20:24:42 crc kubenswrapper[4796]: I1202 20:24:42.462857 4796 generic.go:334] "Generic (PLEG): container finished" podID="9f40dddc-3877-418d-8f01-9c1ac187cccf" containerID="93f3aea12e6cc91d3435e2a47ff90be4b810427f20f018285105a4455e2d8ec6" exitCode=0 Dec 02 20:24:42 crc kubenswrapper[4796]: I1202 20:24:42.462963 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83jtdtp" event={"ID":"9f40dddc-3877-418d-8f01-9c1ac187cccf","Type":"ContainerDied","Data":"93f3aea12e6cc91d3435e2a47ff90be4b810427f20f018285105a4455e2d8ec6"} Dec 02 20:24:42 crc kubenswrapper[4796]: I1202 20:24:42.670233 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wlxs7"] Dec 02 20:24:42 crc kubenswrapper[4796]: I1202 20:24:42.671889 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wlxs7" Dec 02 20:24:42 crc kubenswrapper[4796]: I1202 20:24:42.691747 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d224fbb7-46b7-4c42-95b0-c05149c50ec1-catalog-content\") pod \"redhat-operators-wlxs7\" (UID: \"d224fbb7-46b7-4c42-95b0-c05149c50ec1\") " pod="openshift-marketplace/redhat-operators-wlxs7" Dec 02 20:24:42 crc kubenswrapper[4796]: I1202 20:24:42.691919 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d224fbb7-46b7-4c42-95b0-c05149c50ec1-utilities\") pod \"redhat-operators-wlxs7\" (UID: \"d224fbb7-46b7-4c42-95b0-c05149c50ec1\") " pod="openshift-marketplace/redhat-operators-wlxs7" Dec 02 20:24:42 crc kubenswrapper[4796]: I1202 20:24:42.692032 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks7wd\" (UniqueName: \"kubernetes.io/projected/d224fbb7-46b7-4c42-95b0-c05149c50ec1-kube-api-access-ks7wd\") pod \"redhat-operators-wlxs7\" (UID: \"d224fbb7-46b7-4c42-95b0-c05149c50ec1\") " pod="openshift-marketplace/redhat-operators-wlxs7" Dec 02 20:24:42 crc kubenswrapper[4796]: I1202 20:24:42.701876 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wlxs7"] Dec 02 20:24:42 crc kubenswrapper[4796]: I1202 20:24:42.792988 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d224fbb7-46b7-4c42-95b0-c05149c50ec1-utilities\") pod \"redhat-operators-wlxs7\" (UID: \"d224fbb7-46b7-4c42-95b0-c05149c50ec1\") " pod="openshift-marketplace/redhat-operators-wlxs7" Dec 02 20:24:42 crc kubenswrapper[4796]: I1202 20:24:42.793058 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks7wd\" (UniqueName: \"kubernetes.io/projected/d224fbb7-46b7-4c42-95b0-c05149c50ec1-kube-api-access-ks7wd\") pod \"redhat-operators-wlxs7\" (UID: \"d224fbb7-46b7-4c42-95b0-c05149c50ec1\") " pod="openshift-marketplace/redhat-operators-wlxs7" Dec 02 20:24:42 crc kubenswrapper[4796]: I1202 20:24:42.793150 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d224fbb7-46b7-4c42-95b0-c05149c50ec1-catalog-content\") pod \"redhat-operators-wlxs7\" (UID: \"d224fbb7-46b7-4c42-95b0-c05149c50ec1\") " pod="openshift-marketplace/redhat-operators-wlxs7" Dec 02 20:24:42 crc kubenswrapper[4796]: I1202 20:24:42.793603 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d224fbb7-46b7-4c42-95b0-c05149c50ec1-utilities\") pod \"redhat-operators-wlxs7\" (UID: \"d224fbb7-46b7-4c42-95b0-c05149c50ec1\") " pod="openshift-marketplace/redhat-operators-wlxs7" Dec 02 20:24:42 crc kubenswrapper[4796]: I1202 20:24:42.793691 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d224fbb7-46b7-4c42-95b0-c05149c50ec1-catalog-content\") pod \"redhat-operators-wlxs7\" (UID: \"d224fbb7-46b7-4c42-95b0-c05149c50ec1\") " pod="openshift-marketplace/redhat-operators-wlxs7" Dec 02 20:24:42 crc kubenswrapper[4796]: I1202 20:24:42.829344 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-ks7wd\" (UniqueName: \"kubernetes.io/projected/d224fbb7-46b7-4c42-95b0-c05149c50ec1-kube-api-access-ks7wd\") pod \"redhat-operators-wlxs7\" (UID: \"d224fbb7-46b7-4c42-95b0-c05149c50ec1\") " pod="openshift-marketplace/redhat-operators-wlxs7" Dec 02 20:24:42 crc kubenswrapper[4796]: I1202 20:24:42.998675 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wlxs7" Dec 02 20:24:43 crc kubenswrapper[4796]: I1202 20:24:43.343146 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wlxs7"] Dec 02 20:24:43 crc kubenswrapper[4796]: I1202 20:24:43.472014 4796 generic.go:334] "Generic (PLEG): container finished" podID="9f40dddc-3877-418d-8f01-9c1ac187cccf" containerID="44ac7b42e02f246f323b0ebee27a205edc0a6386908cc29198ba87444d35243f" exitCode=0 Dec 02 20:24:43 crc kubenswrapper[4796]: I1202 20:24:43.472105 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83jtdtp" event={"ID":"9f40dddc-3877-418d-8f01-9c1ac187cccf","Type":"ContainerDied","Data":"44ac7b42e02f246f323b0ebee27a205edc0a6386908cc29198ba87444d35243f"} Dec 02 20:24:43 crc kubenswrapper[4796]: I1202 20:24:43.473081 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wlxs7" event={"ID":"d224fbb7-46b7-4c42-95b0-c05149c50ec1","Type":"ContainerStarted","Data":"1ce7aea3185a78a6296f9132fe6fc2a02fe4870acde89bfd76eaeb10f2c56a8d"} Dec 02 20:24:44 crc kubenswrapper[4796]: I1202 20:24:44.484909 4796 generic.go:334] "Generic (PLEG): container finished" podID="d224fbb7-46b7-4c42-95b0-c05149c50ec1" containerID="6e5c7498cefe50e3511f82d5c5dff9c8435d00b4de72887cd2fc9f00423ea412" exitCode=0 Dec 02 20:24:44 crc kubenswrapper[4796]: I1202 20:24:44.485178 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wlxs7" event={"ID":"d224fbb7-46b7-4c42-95b0-c05149c50ec1","Type":"ContainerDied","Data":"6e5c7498cefe50e3511f82d5c5dff9c8435d00b4de72887cd2fc9f00423ea412"} Dec 02 20:24:44 crc kubenswrapper[4796]: I1202 20:24:44.802964 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83jtdtp" Dec 02 20:24:44 crc kubenswrapper[4796]: I1202 20:24:44.922382 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9f40dddc-3877-418d-8f01-9c1ac187cccf-bundle\") pod \"9f40dddc-3877-418d-8f01-9c1ac187cccf\" (UID: \"9f40dddc-3877-418d-8f01-9c1ac187cccf\") " Dec 02 20:24:44 crc kubenswrapper[4796]: I1202 20:24:44.922459 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xxpw\" (UniqueName: \"kubernetes.io/projected/9f40dddc-3877-418d-8f01-9c1ac187cccf-kube-api-access-2xxpw\") pod \"9f40dddc-3877-418d-8f01-9c1ac187cccf\" (UID: \"9f40dddc-3877-418d-8f01-9c1ac187cccf\") " Dec 02 20:24:44 crc kubenswrapper[4796]: I1202 20:24:44.922527 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9f40dddc-3877-418d-8f01-9c1ac187cccf-util\") pod \"9f40dddc-3877-418d-8f01-9c1ac187cccf\" (UID: \"9f40dddc-3877-418d-8f01-9c1ac187cccf\") " Dec 02 20:24:44 crc kubenswrapper[4796]: I1202 20:24:44.924016 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f40dddc-3877-418d-8f01-9c1ac187cccf-bundle" (OuterVolumeSpecName: "bundle") pod "9f40dddc-3877-418d-8f01-9c1ac187cccf" (UID: "9f40dddc-3877-418d-8f01-9c1ac187cccf"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:24:44 crc kubenswrapper[4796]: I1202 20:24:44.935431 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f40dddc-3877-418d-8f01-9c1ac187cccf-kube-api-access-2xxpw" (OuterVolumeSpecName: "kube-api-access-2xxpw") pod "9f40dddc-3877-418d-8f01-9c1ac187cccf" (UID: "9f40dddc-3877-418d-8f01-9c1ac187cccf"). InnerVolumeSpecName "kube-api-access-2xxpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:24:44 crc kubenswrapper[4796]: I1202 20:24:44.936889 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f40dddc-3877-418d-8f01-9c1ac187cccf-util" (OuterVolumeSpecName: "util") pod "9f40dddc-3877-418d-8f01-9c1ac187cccf" (UID: "9f40dddc-3877-418d-8f01-9c1ac187cccf"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:24:45 crc kubenswrapper[4796]: I1202 20:24:45.023739 4796 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9f40dddc-3877-418d-8f01-9c1ac187cccf-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:24:45 crc kubenswrapper[4796]: I1202 20:24:45.023782 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xxpw\" (UniqueName: \"kubernetes.io/projected/9f40dddc-3877-418d-8f01-9c1ac187cccf-kube-api-access-2xxpw\") on node \"crc\" DevicePath \"\"" Dec 02 20:24:45 crc kubenswrapper[4796]: I1202 20:24:45.023793 4796 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9f40dddc-3877-418d-8f01-9c1ac187cccf-util\") on node \"crc\" DevicePath \"\"" Dec 02 20:24:45 crc kubenswrapper[4796]: I1202 20:24:45.498951 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83jtdtp" event={"ID":"9f40dddc-3877-418d-8f01-9c1ac187cccf","Type":"ContainerDied","Data":"6298ad2da16ebe79e6fbe4b9176e740db1d126b341c002ed853ba9d9997fe042"} Dec 02 20:24:45 crc kubenswrapper[4796]: I1202 20:24:45.499319 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6298ad2da16ebe79e6fbe4b9176e740db1d126b341c002ed853ba9d9997fe042" Dec 02 20:24:45 crc kubenswrapper[4796]: I1202 20:24:45.499039 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83jtdtp" Dec 02 20:24:46 crc kubenswrapper[4796]: I1202 20:24:46.511356 4796 generic.go:334] "Generic (PLEG): container finished" podID="d224fbb7-46b7-4c42-95b0-c05149c50ec1" containerID="a8cc53cdf63e88f95b4dab6b5cfd2597bda77d3e37f4f96b340adec3004d8ceb" exitCode=0 Dec 02 20:24:46 crc kubenswrapper[4796]: I1202 20:24:46.511501 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wlxs7" event={"ID":"d224fbb7-46b7-4c42-95b0-c05149c50ec1","Type":"ContainerDied","Data":"a8cc53cdf63e88f95b4dab6b5cfd2597bda77d3e37f4f96b340adec3004d8ceb"} Dec 02 20:24:47 crc kubenswrapper[4796]: I1202 20:24:47.538235 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wlxs7" event={"ID":"d224fbb7-46b7-4c42-95b0-c05149c50ec1","Type":"ContainerStarted","Data":"ad10a84ec4bf1a24029cb1b450554d825028de0f1ebbe6f7e4503c184ce936be"} Dec 02 20:24:47 crc kubenswrapper[4796]: I1202 20:24:47.565269 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wlxs7" podStartSLOduration=3.149428065 podStartE2EDuration="5.565235376s" podCreationTimestamp="2025-12-02 20:24:42 +0000 UTC" firstStartedPulling="2025-12-02 20:24:44.488769713 +0000 UTC m=+767.492145257" lastFinishedPulling="2025-12-02 20:24:46.904577034 +0000 UTC m=+769.907952568" observedRunningTime="2025-12-02 20:24:47.563882893 +0000 UTC m=+770.567258427" watchObservedRunningTime="2025-12-02 20:24:47.565235376 +0000 UTC m=+770.568610910" Dec 02 20:24:53 crc kubenswrapper[4796]: I1202 20:24:52.999515 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wlxs7" Dec 02 20:24:53 crc kubenswrapper[4796]: I1202 20:24:53.000352 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wlxs7" Dec 02 
20:24:54 crc kubenswrapper[4796]: I1202 20:24:54.069698 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wlxs7" podUID="d224fbb7-46b7-4c42-95b0-c05149c50ec1" containerName="registry-server" probeResult="failure" output=< Dec 02 20:24:54 crc kubenswrapper[4796]: timeout: failed to connect service ":50051" within 1s Dec 02 20:24:54 crc kubenswrapper[4796]: > Dec 02 20:24:54 crc kubenswrapper[4796]: I1202 20:24:54.447912 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-746c7ccd88-dmrmm"] Dec 02 20:24:54 crc kubenswrapper[4796]: E1202 20:24:54.448141 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f40dddc-3877-418d-8f01-9c1ac187cccf" containerName="pull" Dec 02 20:24:54 crc kubenswrapper[4796]: I1202 20:24:54.448152 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f40dddc-3877-418d-8f01-9c1ac187cccf" containerName="pull" Dec 02 20:24:54 crc kubenswrapper[4796]: E1202 20:24:54.448166 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f40dddc-3877-418d-8f01-9c1ac187cccf" containerName="util" Dec 02 20:24:54 crc kubenswrapper[4796]: I1202 20:24:54.448172 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f40dddc-3877-418d-8f01-9c1ac187cccf" containerName="util" Dec 02 20:24:54 crc kubenswrapper[4796]: E1202 20:24:54.448187 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f40dddc-3877-418d-8f01-9c1ac187cccf" containerName="extract" Dec 02 20:24:54 crc kubenswrapper[4796]: I1202 20:24:54.448193 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f40dddc-3877-418d-8f01-9c1ac187cccf" containerName="extract" Dec 02 20:24:54 crc kubenswrapper[4796]: I1202 20:24:54.448306 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f40dddc-3877-418d-8f01-9c1ac187cccf" containerName="extract" Dec 02 20:24:54 crc kubenswrapper[4796]: I1202 20:24:54.448733 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-746c7ccd88-dmrmm" Dec 02 20:24:54 crc kubenswrapper[4796]: I1202 20:24:54.452043 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Dec 02 20:24:54 crc kubenswrapper[4796]: I1202 20:24:54.452463 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Dec 02 20:24:54 crc kubenswrapper[4796]: I1202 20:24:54.452642 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Dec 02 20:24:54 crc kubenswrapper[4796]: I1202 20:24:54.453062 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-lk4l9" Dec 02 20:24:54 crc kubenswrapper[4796]: I1202 20:24:54.453353 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Dec 02 20:24:54 crc kubenswrapper[4796]: I1202 20:24:54.453348 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dzjp\" (UniqueName: \"kubernetes.io/projected/5f2a6f4c-f52d-443a-bd27-8066451b87f2-kube-api-access-8dzjp\") pod \"metallb-operator-controller-manager-746c7ccd88-dmrmm\" (UID: \"5f2a6f4c-f52d-443a-bd27-8066451b87f2\") " pod="metallb-system/metallb-operator-controller-manager-746c7ccd88-dmrmm" Dec 02 20:24:54 crc kubenswrapper[4796]: I1202 20:24:54.453602 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5f2a6f4c-f52d-443a-bd27-8066451b87f2-apiservice-cert\") pod \"metallb-operator-controller-manager-746c7ccd88-dmrmm\" (UID: \"5f2a6f4c-f52d-443a-bd27-8066451b87f2\") " pod="metallb-system/metallb-operator-controller-manager-746c7ccd88-dmrmm" Dec 02 20:24:54 crc kubenswrapper[4796]: I1202 20:24:54.453738 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5f2a6f4c-f52d-443a-bd27-8066451b87f2-webhook-cert\") pod \"metallb-operator-controller-manager-746c7ccd88-dmrmm\" (UID: \"5f2a6f4c-f52d-443a-bd27-8066451b87f2\") " pod="metallb-system/metallb-operator-controller-manager-746c7ccd88-dmrmm" Dec 02 20:24:54 crc kubenswrapper[4796]: I1202 20:24:54.473643 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-746c7ccd88-dmrmm"] Dec 02 20:24:54 crc kubenswrapper[4796]: I1202 20:24:54.555110 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5f2a6f4c-f52d-443a-bd27-8066451b87f2-apiservice-cert\") pod \"metallb-operator-controller-manager-746c7ccd88-dmrmm\" (UID: \"5f2a6f4c-f52d-443a-bd27-8066451b87f2\") " pod="metallb-system/metallb-operator-controller-manager-746c7ccd88-dmrmm" Dec 02 20:24:54 crc kubenswrapper[4796]: I1202 20:24:54.555243 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5f2a6f4c-f52d-443a-bd27-8066451b87f2-webhook-cert\") pod \"metallb-operator-controller-manager-746c7ccd88-dmrmm\" (UID: \"5f2a6f4c-f52d-443a-bd27-8066451b87f2\") " pod="metallb-system/metallb-operator-controller-manager-746c7ccd88-dmrmm" Dec 02 20:24:54 crc kubenswrapper[4796]: I1202 20:24:54.555325 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dzjp\" (UniqueName: \"kubernetes.io/projected/5f2a6f4c-f52d-443a-bd27-8066451b87f2-kube-api-access-8dzjp\") pod \"metallb-operator-controller-manager-746c7ccd88-dmrmm\" (UID: \"5f2a6f4c-f52d-443a-bd27-8066451b87f2\") " pod="metallb-system/metallb-operator-controller-manager-746c7ccd88-dmrmm" Dec 02 20:24:54 crc kubenswrapper[4796]: I1202 20:24:54.562984 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5f2a6f4c-f52d-443a-bd27-8066451b87f2-webhook-cert\") pod \"metallb-operator-controller-manager-746c7ccd88-dmrmm\" (UID: \"5f2a6f4c-f52d-443a-bd27-8066451b87f2\") " pod="metallb-system/metallb-operator-controller-manager-746c7ccd88-dmrmm" Dec 02 20:24:54 crc kubenswrapper[4796]: I1202 20:24:54.563769 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5f2a6f4c-f52d-443a-bd27-8066451b87f2-apiservice-cert\") pod \"metallb-operator-controller-manager-746c7ccd88-dmrmm\" (UID: \"5f2a6f4c-f52d-443a-bd27-8066451b87f2\") " pod="metallb-system/metallb-operator-controller-manager-746c7ccd88-dmrmm" Dec 02 20:24:54 crc kubenswrapper[4796]: I1202 20:24:54.586134 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dzjp\" (UniqueName: \"kubernetes.io/projected/5f2a6f4c-f52d-443a-bd27-8066451b87f2-kube-api-access-8dzjp\") pod \"metallb-operator-controller-manager-746c7ccd88-dmrmm\" (UID: \"5f2a6f4c-f52d-443a-bd27-8066451b87f2\") " pod="metallb-system/metallb-operator-controller-manager-746c7ccd88-dmrmm" Dec 02 20:24:54 crc kubenswrapper[4796]: I1202 20:24:54.767625 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-746c7ccd88-dmrmm" Dec 02 20:24:54 crc kubenswrapper[4796]: I1202 20:24:54.779427 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5d77994449-g7tpd"] Dec 02 20:24:54 crc kubenswrapper[4796]: I1202 20:24:54.780597 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5d77994449-g7tpd" Dec 02 20:24:54 crc kubenswrapper[4796]: I1202 20:24:54.787105 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Dec 02 20:24:54 crc kubenswrapper[4796]: I1202 20:24:54.787332 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 02 20:24:54 crc kubenswrapper[4796]: I1202 20:24:54.788909 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-2vgzz" Dec 02 20:24:54 crc kubenswrapper[4796]: I1202 20:24:54.812322 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5d77994449-g7tpd"] Dec 02 20:24:54 crc kubenswrapper[4796]: I1202 20:24:54.959976 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwkq7\" (UniqueName: \"kubernetes.io/projected/7622e83f-8b87-470f-b7b1-adb772c3cafa-kube-api-access-zwkq7\") pod \"metallb-operator-webhook-server-5d77994449-g7tpd\" (UID: \"7622e83f-8b87-470f-b7b1-adb772c3cafa\") " pod="metallb-system/metallb-operator-webhook-server-5d77994449-g7tpd" Dec 02 20:24:54 crc kubenswrapper[4796]: I1202 20:24:54.960387 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7622e83f-8b87-470f-b7b1-adb772c3cafa-webhook-cert\") pod \"metallb-operator-webhook-server-5d77994449-g7tpd\" (UID: \"7622e83f-8b87-470f-b7b1-adb772c3cafa\") " pod="metallb-system/metallb-operator-webhook-server-5d77994449-g7tpd" Dec 02 20:24:54 crc kubenswrapper[4796]: I1202 20:24:54.960408 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7622e83f-8b87-470f-b7b1-adb772c3cafa-apiservice-cert\") pod \"metallb-operator-webhook-server-5d77994449-g7tpd\" (UID: \"7622e83f-8b87-470f-b7b1-adb772c3cafa\") " pod="metallb-system/metallb-operator-webhook-server-5d77994449-g7tpd" Dec 02 20:24:55 crc kubenswrapper[4796]: I1202 20:24:55.061236 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwkq7\" (UniqueName: \"kubernetes.io/projected/7622e83f-8b87-470f-b7b1-adb772c3cafa-kube-api-access-zwkq7\") pod \"metallb-operator-webhook-server-5d77994449-g7tpd\" (UID: \"7622e83f-8b87-470f-b7b1-adb772c3cafa\") " pod="metallb-system/metallb-operator-webhook-server-5d77994449-g7tpd" Dec 02 20:24:55 crc kubenswrapper[4796]: I1202 20:24:55.061314 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7622e83f-8b87-470f-b7b1-adb772c3cafa-webhook-cert\") pod \"metallb-operator-webhook-server-5d77994449-g7tpd\" (UID: \"7622e83f-8b87-470f-b7b1-adb772c3cafa\") " pod="metallb-system/metallb-operator-webhook-server-5d77994449-g7tpd" Dec 02 20:24:55 crc kubenswrapper[4796]: I1202 20:24:55.061338 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7622e83f-8b87-470f-b7b1-adb772c3cafa-apiservice-cert\") pod \"metallb-operator-webhook-server-5d77994449-g7tpd\" (UID: \"7622e83f-8b87-470f-b7b1-adb772c3cafa\") " pod="metallb-system/metallb-operator-webhook-server-5d77994449-g7tpd" Dec 02 20:24:55 crc kubenswrapper[4796]: I1202 
20:24:55.067150 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7622e83f-8b87-470f-b7b1-adb772c3cafa-webhook-cert\") pod \"metallb-operator-webhook-server-5d77994449-g7tpd\" (UID: \"7622e83f-8b87-470f-b7b1-adb772c3cafa\") " pod="metallb-system/metallb-operator-webhook-server-5d77994449-g7tpd" Dec 02 20:24:55 crc kubenswrapper[4796]: I1202 20:24:55.068404 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7622e83f-8b87-470f-b7b1-adb772c3cafa-apiservice-cert\") pod \"metallb-operator-webhook-server-5d77994449-g7tpd\" (UID: \"7622e83f-8b87-470f-b7b1-adb772c3cafa\") " pod="metallb-system/metallb-operator-webhook-server-5d77994449-g7tpd" Dec 02 20:24:55 crc kubenswrapper[4796]: I1202 20:24:55.078347 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwkq7\" (UniqueName: \"kubernetes.io/projected/7622e83f-8b87-470f-b7b1-adb772c3cafa-kube-api-access-zwkq7\") pod \"metallb-operator-webhook-server-5d77994449-g7tpd\" (UID: \"7622e83f-8b87-470f-b7b1-adb772c3cafa\") " pod="metallb-system/metallb-operator-webhook-server-5d77994449-g7tpd" Dec 02 20:24:55 crc kubenswrapper[4796]: I1202 20:24:55.154993 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5d77994449-g7tpd" Dec 02 20:24:55 crc kubenswrapper[4796]: I1202 20:24:55.190532 4796 patch_prober.go:28] interesting pod/machine-config-daemon-wzhpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 20:24:55 crc kubenswrapper[4796]: I1202 20:24:55.190609 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 20:24:55 crc kubenswrapper[4796]: I1202 20:24:55.190663 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" Dec 02 20:24:55 crc kubenswrapper[4796]: I1202 20:24:55.191692 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a547bba042a4ae0a5c7b160e108564e0b4924894b3f1b07d2ef5933a2669d856"} pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 20:24:55 crc kubenswrapper[4796]: I1202 20:24:55.191768 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerName="machine-config-daemon" containerID="cri-o://a547bba042a4ae0a5c7b160e108564e0b4924894b3f1b07d2ef5933a2669d856" gracePeriod=600 Dec 02 20:24:55 crc kubenswrapper[4796]: I1202 20:24:55.281977 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-746c7ccd88-dmrmm"] Dec 02 20:24:55 crc kubenswrapper[4796]: I1202 20:24:55.397812 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["metallb-system/metallb-operator-webhook-server-5d77994449-g7tpd"] Dec 02 20:24:55 crc kubenswrapper[4796]: I1202 20:24:55.602661 4796 generic.go:334] "Generic (PLEG): container finished" podID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerID="a547bba042a4ae0a5c7b160e108564e0b4924894b3f1b07d2ef5933a2669d856" exitCode=0 Dec 02 20:24:55 crc kubenswrapper[4796]: I1202 20:24:55.602742 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" event={"ID":"5558dc7c-93f9-4212-bf22-fdec743e47ee","Type":"ContainerDied","Data":"a547bba042a4ae0a5c7b160e108564e0b4924894b3f1b07d2ef5933a2669d856"} Dec 02 20:24:55 crc kubenswrapper[4796]: I1202 20:24:55.602937 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" event={"ID":"5558dc7c-93f9-4212-bf22-fdec743e47ee","Type":"ContainerStarted","Data":"8d6f0c135e487e19c4f958756870ce83ded4504e5b54dacbb97a36ee8b0a0032"} Dec 02 20:24:55 crc kubenswrapper[4796]: I1202 20:24:55.603015 4796 scope.go:117] "RemoveContainer" containerID="22d1e0fd25ff5e073a4946805e750f34011e82cb383730cfb25bb48ca777f3f4" Dec 02 20:24:55 crc kubenswrapper[4796]: I1202 20:24:55.605235 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-746c7ccd88-dmrmm" event={"ID":"5f2a6f4c-f52d-443a-bd27-8066451b87f2","Type":"ContainerStarted","Data":"f463182f74a185b94940259448b33c1a875fcb94fa10442270f2bb5c3462d90a"} Dec 02 20:24:55 crc kubenswrapper[4796]: I1202 20:24:55.613425 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5d77994449-g7tpd" event={"ID":"7622e83f-8b87-470f-b7b1-adb772c3cafa","Type":"ContainerStarted","Data":"ba0c9e7608e1458d70eb9e935e041bb0710b5ead2952378711a3610e12a7f14c"} Dec 02 20:25:02 crc kubenswrapper[4796]: I1202 20:25:02.721379 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-746c7ccd88-dmrmm" event={"ID":"5f2a6f4c-f52d-443a-bd27-8066451b87f2","Type":"ContainerStarted","Data":"501d0a2a3b098821b04315f8f1348d41deb7d72692936467e7eceb20ce006b6c"} Dec 02 20:25:02 crc kubenswrapper[4796]: I1202 20:25:02.723542 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5d77994449-g7tpd" event={"ID":"7622e83f-8b87-470f-b7b1-adb772c3cafa","Type":"ContainerStarted","Data":"3969b4d6b89d325c7a37b1767e2bc52f0ed9ebaa5a9bf3152fed5ba3c5010661"} Dec 02 20:25:02 crc kubenswrapper[4796]: I1202 20:25:02.723732 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5d77994449-g7tpd" Dec 02 20:25:02 crc kubenswrapper[4796]: I1202 20:25:02.754996 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-746c7ccd88-dmrmm" podStartSLOduration=2.271552392 podStartE2EDuration="8.754953955s" podCreationTimestamp="2025-12-02 20:24:54 +0000 UTC" firstStartedPulling="2025-12-02 20:24:55.285281167 +0000 UTC m=+778.288656701" lastFinishedPulling="2025-12-02 20:25:01.76868272 +0000 UTC m=+784.772058264" observedRunningTime="2025-12-02 20:25:02.745950696 +0000 UTC m=+785.749326270" watchObservedRunningTime="2025-12-02 20:25:02.754953955 +0000 UTC m=+785.758329479" Dec 02 20:25:02 crc kubenswrapper[4796]: I1202 20:25:02.775975 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/metallb-operator-webhook-server-5d77994449-g7tpd" podStartSLOduration=2.408316035 podStartE2EDuration="8.775941435s" podCreationTimestamp="2025-12-02 20:24:54 +0000 UTC" firstStartedPulling="2025-12-02 20:24:55.41956549 +0000 UTC m=+778.422941024" lastFinishedPulling="2025-12-02 20:25:01.78719089 +0000 UTC m=+784.790566424" observedRunningTime="2025-12-02 20:25:02.773856144 +0000 UTC m=+785.777231748" watchObservedRunningTime="2025-12-02 20:25:02.775941435 +0000 UTC m=+785.779317019" Dec 02 20:25:03 crc kubenswrapper[4796]: I1202 20:25:03.047024 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wlxs7" Dec 02 20:25:03 crc kubenswrapper[4796]: I1202 20:25:03.099017 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wlxs7" Dec 02 20:25:03 crc kubenswrapper[4796]: I1202 20:25:03.292812 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wlxs7"] Dec 02 20:25:03 crc kubenswrapper[4796]: I1202 20:25:03.729320 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-746c7ccd88-dmrmm" Dec 02 20:25:04 crc kubenswrapper[4796]: I1202 20:25:04.738408 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wlxs7" podUID="d224fbb7-46b7-4c42-95b0-c05149c50ec1" containerName="registry-server" containerID="cri-o://ad10a84ec4bf1a24029cb1b450554d825028de0f1ebbe6f7e4503c184ce936be" gracePeriod=2 Dec 02 20:25:05 crc kubenswrapper[4796]: I1202 20:25:05.705768 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wlxs7" Dec 02 20:25:05 crc kubenswrapper[4796]: I1202 20:25:05.751767 4796 generic.go:334] "Generic (PLEG): container finished" podID="d224fbb7-46b7-4c42-95b0-c05149c50ec1" containerID="ad10a84ec4bf1a24029cb1b450554d825028de0f1ebbe6f7e4503c184ce936be" exitCode=0 Dec 02 20:25:05 crc kubenswrapper[4796]: I1202 20:25:05.751834 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wlxs7" event={"ID":"d224fbb7-46b7-4c42-95b0-c05149c50ec1","Type":"ContainerDied","Data":"ad10a84ec4bf1a24029cb1b450554d825028de0f1ebbe6f7e4503c184ce936be"} Dec 02 20:25:05 crc kubenswrapper[4796]: I1202 20:25:05.751843 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wlxs7" Dec 02 20:25:05 crc kubenswrapper[4796]: I1202 20:25:05.751904 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wlxs7" event={"ID":"d224fbb7-46b7-4c42-95b0-c05149c50ec1","Type":"ContainerDied","Data":"1ce7aea3185a78a6296f9132fe6fc2a02fe4870acde89bfd76eaeb10f2c56a8d"} Dec 02 20:25:05 crc kubenswrapper[4796]: I1202 20:25:05.751924 4796 scope.go:117] "RemoveContainer" containerID="ad10a84ec4bf1a24029cb1b450554d825028de0f1ebbe6f7e4503c184ce936be" Dec 02 20:25:05 crc kubenswrapper[4796]: I1202 20:25:05.790284 4796 scope.go:117] "RemoveContainer" containerID="a8cc53cdf63e88f95b4dab6b5cfd2597bda77d3e37f4f96b340adec3004d8ceb" Dec 02 20:25:05 crc kubenswrapper[4796]: I1202 20:25:05.810728 4796 scope.go:117] "RemoveContainer" containerID="6e5c7498cefe50e3511f82d5c5dff9c8435d00b4de72887cd2fc9f00423ea412" Dec 02 20:25:05 crc kubenswrapper[4796]: I1202 20:25:05.833486 4796 scope.go:117] "RemoveContainer" containerID="ad10a84ec4bf1a24029cb1b450554d825028de0f1ebbe6f7e4503c184ce936be" Dec 02 20:25:05 crc kubenswrapper[4796]: E1202 20:25:05.833939 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad10a84ec4bf1a24029cb1b450554d825028de0f1ebbe6f7e4503c184ce936be\": container with ID starting with ad10a84ec4bf1a24029cb1b450554d825028de0f1ebbe6f7e4503c184ce936be not found: ID does not exist" containerID="ad10a84ec4bf1a24029cb1b450554d825028de0f1ebbe6f7e4503c184ce936be" Dec 02 20:25:05 crc kubenswrapper[4796]: I1202 20:25:05.834016 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad10a84ec4bf1a24029cb1b450554d825028de0f1ebbe6f7e4503c184ce936be"} err="failed to get container status \"ad10a84ec4bf1a24029cb1b450554d825028de0f1ebbe6f7e4503c184ce936be\": rpc error: code = NotFound desc = could not find container \"ad10a84ec4bf1a24029cb1b450554d825028de0f1ebbe6f7e4503c184ce936be\": container with ID starting with ad10a84ec4bf1a24029cb1b450554d825028de0f1ebbe6f7e4503c184ce936be not found: ID does not exist" Dec 02 20:25:05 crc kubenswrapper[4796]: I1202 20:25:05.834055 4796 scope.go:117] "RemoveContainer" containerID="a8cc53cdf63e88f95b4dab6b5cfd2597bda77d3e37f4f96b340adec3004d8ceb" Dec 02 20:25:05 crc kubenswrapper[4796]: E1202 20:25:05.834709 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8cc53cdf63e88f95b4dab6b5cfd2597bda77d3e37f4f96b340adec3004d8ceb\": container with ID starting with a8cc53cdf63e88f95b4dab6b5cfd2597bda77d3e37f4f96b340adec3004d8ceb not found: ID does not exist" containerID="a8cc53cdf63e88f95b4dab6b5cfd2597bda77d3e37f4f96b340adec3004d8ceb" Dec 02 20:25:05 crc kubenswrapper[4796]: I1202 20:25:05.834766 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8cc53cdf63e88f95b4dab6b5cfd2597bda77d3e37f4f96b340adec3004d8ceb"} err="failed to get container status \"a8cc53cdf63e88f95b4dab6b5cfd2597bda77d3e37f4f96b340adec3004d8ceb\": rpc error: code = NotFound desc = could not find container \"a8cc53cdf63e88f95b4dab6b5cfd2597bda77d3e37f4f96b340adec3004d8ceb\": container with ID starting with a8cc53cdf63e88f95b4dab6b5cfd2597bda77d3e37f4f96b340adec3004d8ceb not found: ID does not exist" Dec 02 20:25:05 crc kubenswrapper[4796]: I1202 20:25:05.834826 4796 scope.go:117] "RemoveContainer" 
containerID="6e5c7498cefe50e3511f82d5c5dff9c8435d00b4de72887cd2fc9f00423ea412" Dec 02 20:25:05 crc kubenswrapper[4796]: E1202 20:25:05.835201 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e5c7498cefe50e3511f82d5c5dff9c8435d00b4de72887cd2fc9f00423ea412\": container with ID starting with 6e5c7498cefe50e3511f82d5c5dff9c8435d00b4de72887cd2fc9f00423ea412 not found: ID does not exist" containerID="6e5c7498cefe50e3511f82d5c5dff9c8435d00b4de72887cd2fc9f00423ea412" Dec 02 20:25:05 crc kubenswrapper[4796]: I1202 20:25:05.835232 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e5c7498cefe50e3511f82d5c5dff9c8435d00b4de72887cd2fc9f00423ea412"} err="failed to get container status \"6e5c7498cefe50e3511f82d5c5dff9c8435d00b4de72887cd2fc9f00423ea412\": rpc error: code = NotFound desc = could not find container \"6e5c7498cefe50e3511f82d5c5dff9c8435d00b4de72887cd2fc9f00423ea412\": container with ID starting with 6e5c7498cefe50e3511f82d5c5dff9c8435d00b4de72887cd2fc9f00423ea412 not found: ID does not exist" Dec 02 20:25:05 crc kubenswrapper[4796]: I1202 20:25:05.848200 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d224fbb7-46b7-4c42-95b0-c05149c50ec1-utilities\") pod \"d224fbb7-46b7-4c42-95b0-c05149c50ec1\" (UID: \"d224fbb7-46b7-4c42-95b0-c05149c50ec1\") " Dec 02 20:25:05 crc kubenswrapper[4796]: I1202 20:25:05.848327 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d224fbb7-46b7-4c42-95b0-c05149c50ec1-catalog-content\") pod \"d224fbb7-46b7-4c42-95b0-c05149c50ec1\" (UID: \"d224fbb7-46b7-4c42-95b0-c05149c50ec1\") " Dec 02 20:25:05 crc kubenswrapper[4796]: I1202 20:25:05.848497 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks7wd\" (UniqueName: \"kubernetes.io/projected/d224fbb7-46b7-4c42-95b0-c05149c50ec1-kube-api-access-ks7wd\") pod \"d224fbb7-46b7-4c42-95b0-c05149c50ec1\" (UID: \"d224fbb7-46b7-4c42-95b0-c05149c50ec1\") " Dec 02 20:25:05 crc kubenswrapper[4796]: I1202 20:25:05.849217 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d224fbb7-46b7-4c42-95b0-c05149c50ec1-utilities" (OuterVolumeSpecName: "utilities") pod "d224fbb7-46b7-4c42-95b0-c05149c50ec1" (UID: "d224fbb7-46b7-4c42-95b0-c05149c50ec1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:25:05 crc kubenswrapper[4796]: I1202 20:25:05.855499 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d224fbb7-46b7-4c42-95b0-c05149c50ec1-kube-api-access-ks7wd" (OuterVolumeSpecName: "kube-api-access-ks7wd") pod "d224fbb7-46b7-4c42-95b0-c05149c50ec1" (UID: "d224fbb7-46b7-4c42-95b0-c05149c50ec1"). InnerVolumeSpecName "kube-api-access-ks7wd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:25:05 crc kubenswrapper[4796]: I1202 20:25:05.950644 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ks7wd\" (UniqueName: \"kubernetes.io/projected/d224fbb7-46b7-4c42-95b0-c05149c50ec1-kube-api-access-ks7wd\") on node \"crc\" DevicePath \"\"" Dec 02 20:25:05 crc kubenswrapper[4796]: I1202 20:25:05.950712 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d224fbb7-46b7-4c42-95b0-c05149c50ec1-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 20:25:05 crc kubenswrapper[4796]: I1202 20:25:05.956068 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d224fbb7-46b7-4c42-95b0-c05149c50ec1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d224fbb7-46b7-4c42-95b0-c05149c50ec1" (UID: "d224fbb7-46b7-4c42-95b0-c05149c50ec1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:25:06 crc kubenswrapper[4796]: I1202 20:25:06.052678 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d224fbb7-46b7-4c42-95b0-c05149c50ec1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 20:25:06 crc kubenswrapper[4796]: I1202 20:25:06.081543 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wlxs7"] Dec 02 20:25:06 crc kubenswrapper[4796]: I1202 20:25:06.087698 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wlxs7"] Dec 02 20:25:07 crc kubenswrapper[4796]: I1202 20:25:07.275113 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d224fbb7-46b7-4c42-95b0-c05149c50ec1" path="/var/lib/kubelet/pods/d224fbb7-46b7-4c42-95b0-c05149c50ec1/volumes" Dec 02 20:25:15 crc kubenswrapper[4796]: I1202 20:25:15.159917 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5d77994449-g7tpd" Dec 02 20:25:34 crc kubenswrapper[4796]: I1202 20:25:34.771385 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-746c7ccd88-dmrmm" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.476875 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-lz795"] Dec 02 20:25:35 crc kubenswrapper[4796]: E1202 20:25:35.477282 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d224fbb7-46b7-4c42-95b0-c05149c50ec1" containerName="extract-utilities" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.477304 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d224fbb7-46b7-4c42-95b0-c05149c50ec1" containerName="extract-utilities" Dec 02 20:25:35 crc kubenswrapper[4796]: E1202 20:25:35.477327 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d224fbb7-46b7-4c42-95b0-c05149c50ec1" containerName="registry-server" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.477336 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d224fbb7-46b7-4c42-95b0-c05149c50ec1" containerName="registry-server" Dec 02 20:25:35 crc kubenswrapper[4796]: E1202 20:25:35.477352 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d224fbb7-46b7-4c42-95b0-c05149c50ec1" containerName="extract-content" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 
20:25:35.477360 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d224fbb7-46b7-4c42-95b0-c05149c50ec1" containerName="extract-content" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.477496 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="d224fbb7-46b7-4c42-95b0-c05149c50ec1" containerName="registry-server" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.478102 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-lz795" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.479962 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.480230 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-hmt5d" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.480938 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-bgvx8"] Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.484445 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-bgvx8" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.486091 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.487332 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.493709 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-lz795"] Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.511948 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bbaf8931-75f7-42ae-a13d-69218a478762-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-lz795\" (UID: \"bbaf8931-75f7-42ae-a13d-69218a478762\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-lz795" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.512001 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w89tk\" (UniqueName: \"kubernetes.io/projected/15e0e87e-e84b-4d87-9e25-224fd500c3a6-kube-api-access-w89tk\") pod \"frr-k8s-bgvx8\" (UID: \"15e0e87e-e84b-4d87-9e25-224fd500c3a6\") " pod="metallb-system/frr-k8s-bgvx8" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.512069 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/15e0e87e-e84b-4d87-9e25-224fd500c3a6-metrics\") pod \"frr-k8s-bgvx8\" (UID: \"15e0e87e-e84b-4d87-9e25-224fd500c3a6\") " pod="metallb-system/frr-k8s-bgvx8" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.512100 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/15e0e87e-e84b-4d87-9e25-224fd500c3a6-frr-startup\") pod \"frr-k8s-bgvx8\" (UID: \"15e0e87e-e84b-4d87-9e25-224fd500c3a6\") " pod="metallb-system/frr-k8s-bgvx8" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.512125 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: 
\"kubernetes.io/empty-dir/15e0e87e-e84b-4d87-9e25-224fd500c3a6-reloader\") pod \"frr-k8s-bgvx8\" (UID: \"15e0e87e-e84b-4d87-9e25-224fd500c3a6\") " pod="metallb-system/frr-k8s-bgvx8" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.512146 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/15e0e87e-e84b-4d87-9e25-224fd500c3a6-frr-conf\") pod \"frr-k8s-bgvx8\" (UID: \"15e0e87e-e84b-4d87-9e25-224fd500c3a6\") " pod="metallb-system/frr-k8s-bgvx8" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.512170 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/15e0e87e-e84b-4d87-9e25-224fd500c3a6-frr-sockets\") pod \"frr-k8s-bgvx8\" (UID: \"15e0e87e-e84b-4d87-9e25-224fd500c3a6\") " pod="metallb-system/frr-k8s-bgvx8" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.512200 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/15e0e87e-e84b-4d87-9e25-224fd500c3a6-metrics-certs\") pod \"frr-k8s-bgvx8\" (UID: \"15e0e87e-e84b-4d87-9e25-224fd500c3a6\") " pod="metallb-system/frr-k8s-bgvx8" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.512223 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjlrf\" (UniqueName: \"kubernetes.io/projected/bbaf8931-75f7-42ae-a13d-69218a478762-kube-api-access-mjlrf\") pod \"frr-k8s-webhook-server-7fcb986d4-lz795\" (UID: \"bbaf8931-75f7-42ae-a13d-69218a478762\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-lz795" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.606317 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-7h6dq"] Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.607363 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-7h6dq" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.613200 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/15e0e87e-e84b-4d87-9e25-224fd500c3a6-frr-startup\") pod \"frr-k8s-bgvx8\" (UID: \"15e0e87e-e84b-4d87-9e25-224fd500c3a6\") " pod="metallb-system/frr-k8s-bgvx8" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.613243 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/15e0e87e-e84b-4d87-9e25-224fd500c3a6-reloader\") pod \"frr-k8s-bgvx8\" (UID: \"15e0e87e-e84b-4d87-9e25-224fd500c3a6\") " pod="metallb-system/frr-k8s-bgvx8" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.614039 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/15e0e87e-e84b-4d87-9e25-224fd500c3a6-frr-conf\") pod \"frr-k8s-bgvx8\" (UID: \"15e0e87e-e84b-4d87-9e25-224fd500c3a6\") " pod="metallb-system/frr-k8s-bgvx8" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.614153 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/15e0e87e-e84b-4d87-9e25-224fd500c3a6-reloader\") pod \"frr-k8s-bgvx8\" (UID: \"15e0e87e-e84b-4d87-9e25-224fd500c3a6\") " pod="metallb-system/frr-k8s-bgvx8" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.614297 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/15e0e87e-e84b-4d87-9e25-224fd500c3a6-frr-sockets\") pod \"frr-k8s-bgvx8\" (UID: \"15e0e87e-e84b-4d87-9e25-224fd500c3a6\") " pod="metallb-system/frr-k8s-bgvx8" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.614356 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/15e0e87e-e84b-4d87-9e25-224fd500c3a6-metrics-certs\") pod \"frr-k8s-bgvx8\" (UID: \"15e0e87e-e84b-4d87-9e25-224fd500c3a6\") " pod="metallb-system/frr-k8s-bgvx8" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.614386 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjlrf\" (UniqueName: \"kubernetes.io/projected/bbaf8931-75f7-42ae-a13d-69218a478762-kube-api-access-mjlrf\") pod \"frr-k8s-webhook-server-7fcb986d4-lz795\" (UID: \"bbaf8931-75f7-42ae-a13d-69218a478762\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-lz795" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.614484 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/15e0e87e-e84b-4d87-9e25-224fd500c3a6-frr-conf\") pod \"frr-k8s-bgvx8\" (UID: \"15e0e87e-e84b-4d87-9e25-224fd500c3a6\") " pod="metallb-system/frr-k8s-bgvx8" Dec 02 20:25:35 crc kubenswrapper[4796]: E1202 20:25:35.614643 4796 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Dec 02 20:25:35 crc kubenswrapper[4796]: E1202 20:25:35.614725 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15e0e87e-e84b-4d87-9e25-224fd500c3a6-metrics-certs podName:15e0e87e-e84b-4d87-9e25-224fd500c3a6 nodeName:}" failed. No retries permitted until 2025-12-02 20:25:36.114700161 +0000 UTC m=+819.118075695 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/15e0e87e-e84b-4d87-9e25-224fd500c3a6-metrics-certs") pod "frr-k8s-bgvx8" (UID: "15e0e87e-e84b-4d87-9e25-224fd500c3a6") : secret "frr-k8s-certs-secret" not found Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.614834 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/15e0e87e-e84b-4d87-9e25-224fd500c3a6-frr-startup\") pod \"frr-k8s-bgvx8\" (UID: \"15e0e87e-e84b-4d87-9e25-224fd500c3a6\") " pod="metallb-system/frr-k8s-bgvx8" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.620289 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bbaf8931-75f7-42ae-a13d-69218a478762-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-lz795\" (UID: \"bbaf8931-75f7-42ae-a13d-69218a478762\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-lz795" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.620376 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w89tk\" (UniqueName: \"kubernetes.io/projected/15e0e87e-e84b-4d87-9e25-224fd500c3a6-kube-api-access-w89tk\") pod \"frr-k8s-bgvx8\" (UID: \"15e0e87e-e84b-4d87-9e25-224fd500c3a6\") " pod="metallb-system/frr-k8s-bgvx8" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.620528 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/15e0e87e-e84b-4d87-9e25-224fd500c3a6-metrics\") pod \"frr-k8s-bgvx8\" (UID: \"15e0e87e-e84b-4d87-9e25-224fd500c3a6\") " pod="metallb-system/frr-k8s-bgvx8" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.620971 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/15e0e87e-e84b-4d87-9e25-224fd500c3a6-metrics\") pod \"frr-k8s-bgvx8\" (UID: \"15e0e87e-e84b-4d87-9e25-224fd500c3a6\") " pod="metallb-system/frr-k8s-bgvx8" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.621670 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.638384 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.638616 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-ljlcq" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.638800 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.653814 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/15e0e87e-e84b-4d87-9e25-224fd500c3a6-frr-sockets\") pod \"frr-k8s-bgvx8\" (UID: \"15e0e87e-e84b-4d87-9e25-224fd500c3a6\") " pod="metallb-system/frr-k8s-bgvx8" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.681237 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bbaf8931-75f7-42ae-a13d-69218a478762-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-lz795\" (UID: \"bbaf8931-75f7-42ae-a13d-69218a478762\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-lz795" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.690097 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjlrf\" (UniqueName: \"kubernetes.io/projected/bbaf8931-75f7-42ae-a13d-69218a478762-kube-api-access-mjlrf\") pod \"frr-k8s-webhook-server-7fcb986d4-lz795\" (UID: \"bbaf8931-75f7-42ae-a13d-69218a478762\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-lz795" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.712027 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w89tk\" (UniqueName: \"kubernetes.io/projected/15e0e87e-e84b-4d87-9e25-224fd500c3a6-kube-api-access-w89tk\") pod \"frr-k8s-bgvx8\" (UID: \"15e0e87e-e84b-4d87-9e25-224fd500c3a6\") " pod="metallb-system/frr-k8s-bgvx8" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.716080 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-pw5b4"] Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.742093 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7f44f3c0-eb38-4b85-a27d-94aa92562837-metallb-excludel2\") pod \"speaker-7h6dq\" (UID: \"7f44f3c0-eb38-4b85-a27d-94aa92562837\") " pod="metallb-system/speaker-7h6dq" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.742160 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7f44f3c0-eb38-4b85-a27d-94aa92562837-memberlist\") pod \"speaker-7h6dq\" (UID: \"7f44f3c0-eb38-4b85-a27d-94aa92562837\") " pod="metallb-system/speaker-7h6dq" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.742185 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f44f3c0-eb38-4b85-a27d-94aa92562837-metrics-certs\") pod \"speaker-7h6dq\" (UID: \"7f44f3c0-eb38-4b85-a27d-94aa92562837\") " pod="metallb-system/speaker-7h6dq" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.742201 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44r5d\" (UniqueName: \"kubernetes.io/projected/7f44f3c0-eb38-4b85-a27d-94aa92562837-kube-api-access-44r5d\") pod \"speaker-7h6dq\" (UID: \"7f44f3c0-eb38-4b85-a27d-94aa92562837\") " pod="metallb-system/speaker-7h6dq" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.743144 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-pw5b4" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.746466 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.766488 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-pw5b4"] Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.815606 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-lz795" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.844053 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b7cba022-cc8d-457b-86e9-d97b822f03a4-metrics-certs\") pod \"controller-f8648f98b-pw5b4\" (UID: \"b7cba022-cc8d-457b-86e9-d97b822f03a4\") " pod="metallb-system/controller-f8648f98b-pw5b4" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.844145 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b7cba022-cc8d-457b-86e9-d97b822f03a4-cert\") pod \"controller-f8648f98b-pw5b4\" (UID: \"b7cba022-cc8d-457b-86e9-d97b822f03a4\") " pod="metallb-system/controller-f8648f98b-pw5b4" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.844199 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7f44f3c0-eb38-4b85-a27d-94aa92562837-metallb-excludel2\") pod \"speaker-7h6dq\" (UID: \"7f44f3c0-eb38-4b85-a27d-94aa92562837\") " pod="metallb-system/speaker-7h6dq" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.844231 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7f44f3c0-eb38-4b85-a27d-94aa92562837-memberlist\") pod \"speaker-7h6dq\" (UID: \"7f44f3c0-eb38-4b85-a27d-94aa92562837\") " pod="metallb-system/speaker-7h6dq" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.844273 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f44f3c0-eb38-4b85-a27d-94aa92562837-metrics-certs\") pod \"speaker-7h6dq\" (UID: \"7f44f3c0-eb38-4b85-a27d-94aa92562837\") " pod="metallb-system/speaker-7h6dq" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.844291 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44r5d\" (UniqueName: \"kubernetes.io/projected/7f44f3c0-eb38-4b85-a27d-94aa92562837-kube-api-access-44r5d\") pod \"speaker-7h6dq\" (UID: \"7f44f3c0-eb38-4b85-a27d-94aa92562837\") " pod="metallb-system/speaker-7h6dq" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.844315 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mp6d\" (UniqueName: \"kubernetes.io/projected/b7cba022-cc8d-457b-86e9-d97b822f03a4-kube-api-access-2mp6d\") pod \"controller-f8648f98b-pw5b4\" (UID: \"b7cba022-cc8d-457b-86e9-d97b822f03a4\") " pod="metallb-system/controller-f8648f98b-pw5b4" Dec 02 20:25:35 crc kubenswrapper[4796]: E1202 20:25:35.844366 4796 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 02 20:25:35 crc kubenswrapper[4796]: E1202 20:25:35.844458 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f44f3c0-eb38-4b85-a27d-94aa92562837-memberlist podName:7f44f3c0-eb38-4b85-a27d-94aa92562837 nodeName:}" failed. No retries permitted until 2025-12-02 20:25:36.344432181 +0000 UTC m=+819.347807715 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/7f44f3c0-eb38-4b85-a27d-94aa92562837-memberlist") pod "speaker-7h6dq" (UID: "7f44f3c0-eb38-4b85-a27d-94aa92562837") : secret "metallb-memberlist" not found Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.845060 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7f44f3c0-eb38-4b85-a27d-94aa92562837-metallb-excludel2\") pod \"speaker-7h6dq\" (UID: \"7f44f3c0-eb38-4b85-a27d-94aa92562837\") " pod="metallb-system/speaker-7h6dq" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.859750 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f44f3c0-eb38-4b85-a27d-94aa92562837-metrics-certs\") pod \"speaker-7h6dq\" (UID: \"7f44f3c0-eb38-4b85-a27d-94aa92562837\") " pod="metallb-system/speaker-7h6dq" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.882137 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44r5d\" (UniqueName: \"kubernetes.io/projected/7f44f3c0-eb38-4b85-a27d-94aa92562837-kube-api-access-44r5d\") pod \"speaker-7h6dq\" (UID: \"7f44f3c0-eb38-4b85-a27d-94aa92562837\") " pod="metallb-system/speaker-7h6dq" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.945083 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mp6d\" (UniqueName: \"kubernetes.io/projected/b7cba022-cc8d-457b-86e9-d97b822f03a4-kube-api-access-2mp6d\") pod \"controller-f8648f98b-pw5b4\" (UID: \"b7cba022-cc8d-457b-86e9-d97b822f03a4\") " pod="metallb-system/controller-f8648f98b-pw5b4" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.945143 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b7cba022-cc8d-457b-86e9-d97b822f03a4-metrics-certs\") pod \"controller-f8648f98b-pw5b4\" (UID: \"b7cba022-cc8d-457b-86e9-d97b822f03a4\") " pod="metallb-system/controller-f8648f98b-pw5b4" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.945169 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b7cba022-cc8d-457b-86e9-d97b822f03a4-cert\") pod \"controller-f8648f98b-pw5b4\" (UID: \"b7cba022-cc8d-457b-86e9-d97b822f03a4\") " pod="metallb-system/controller-f8648f98b-pw5b4" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.951115 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b7cba022-cc8d-457b-86e9-d97b822f03a4-metrics-certs\") pod \"controller-f8648f98b-pw5b4\" (UID: \"b7cba022-cc8d-457b-86e9-d97b822f03a4\") " pod="metallb-system/controller-f8648f98b-pw5b4" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.952831 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.959178 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b7cba022-cc8d-457b-86e9-d97b822f03a4-cert\") pod \"controller-f8648f98b-pw5b4\" (UID: \"b7cba022-cc8d-457b-86e9-d97b822f03a4\") " pod="metallb-system/controller-f8648f98b-pw5b4" Dec 02 20:25:35 crc kubenswrapper[4796]: I1202 20:25:35.972445 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mp6d\" 
(UniqueName: \"kubernetes.io/projected/b7cba022-cc8d-457b-86e9-d97b822f03a4-kube-api-access-2mp6d\") pod \"controller-f8648f98b-pw5b4\" (UID: \"b7cba022-cc8d-457b-86e9-d97b822f03a4\") " pod="metallb-system/controller-f8648f98b-pw5b4" Dec 02 20:25:36 crc kubenswrapper[4796]: I1202 20:25:36.083888 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-lz795"] Dec 02 20:25:36 crc kubenswrapper[4796]: I1202 20:25:36.084691 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-pw5b4" Dec 02 20:25:36 crc kubenswrapper[4796]: I1202 20:25:36.147210 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/15e0e87e-e84b-4d87-9e25-224fd500c3a6-metrics-certs\") pod \"frr-k8s-bgvx8\" (UID: \"15e0e87e-e84b-4d87-9e25-224fd500c3a6\") " pod="metallb-system/frr-k8s-bgvx8" Dec 02 20:25:36 crc kubenswrapper[4796]: I1202 20:25:36.151549 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/15e0e87e-e84b-4d87-9e25-224fd500c3a6-metrics-certs\") pod \"frr-k8s-bgvx8\" (UID: \"15e0e87e-e84b-4d87-9e25-224fd500c3a6\") " pod="metallb-system/frr-k8s-bgvx8" Dec 02 20:25:36 crc kubenswrapper[4796]: I1202 20:25:36.307197 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-pw5b4"] Dec 02 20:25:36 crc kubenswrapper[4796]: W1202 20:25:36.310680 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7cba022_cc8d_457b_86e9_d97b822f03a4.slice/crio-40bbb60e5f0554cf3529c397c21310d11a51f97dbdff728729a87e7d1a1b2a20 WatchSource:0}: Error finding container 40bbb60e5f0554cf3529c397c21310d11a51f97dbdff728729a87e7d1a1b2a20: Status 404 returned error can't find the container with id 40bbb60e5f0554cf3529c397c21310d11a51f97dbdff728729a87e7d1a1b2a20 Dec 02 20:25:36 crc kubenswrapper[4796]: I1202 20:25:36.349247 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7f44f3c0-eb38-4b85-a27d-94aa92562837-memberlist\") pod \"speaker-7h6dq\" (UID: \"7f44f3c0-eb38-4b85-a27d-94aa92562837\") " pod="metallb-system/speaker-7h6dq" Dec 02 20:25:36 crc kubenswrapper[4796]: E1202 20:25:36.349531 4796 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 02 20:25:36 crc kubenswrapper[4796]: E1202 20:25:36.349693 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f44f3c0-eb38-4b85-a27d-94aa92562837-memberlist podName:7f44f3c0-eb38-4b85-a27d-94aa92562837 nodeName:}" failed. No retries permitted until 2025-12-02 20:25:37.349635022 +0000 UTC m=+820.353010596 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/7f44f3c0-eb38-4b85-a27d-94aa92562837-memberlist") pod "speaker-7h6dq" (UID: "7f44f3c0-eb38-4b85-a27d-94aa92562837") : secret "metallb-memberlist" not found Dec 02 20:25:36 crc kubenswrapper[4796]: I1202 20:25:36.424854 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-bgvx8" Dec 02 20:25:36 crc kubenswrapper[4796]: I1202 20:25:36.992842 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-lz795" event={"ID":"bbaf8931-75f7-42ae-a13d-69218a478762","Type":"ContainerStarted","Data":"d9db5a1b165b1a2bc0095475a40ac22c50d5b351edb72dd000517b621ea31419"} Dec 02 20:25:36 crc kubenswrapper[4796]: I1202 20:25:36.995086 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bgvx8" event={"ID":"15e0e87e-e84b-4d87-9e25-224fd500c3a6","Type":"ContainerStarted","Data":"ec4b537f37406bc1dd605cddd608b69c1b8476c0270b8ee90adca8e6aec16164"} Dec 02 20:25:36 crc kubenswrapper[4796]: I1202 20:25:36.997356 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-pw5b4" event={"ID":"b7cba022-cc8d-457b-86e9-d97b822f03a4","Type":"ContainerStarted","Data":"a972c2d41b1d02e9cd923136e2703e25a84eece137ee246ddf7c2981794163b9"} Dec 02 20:25:36 crc kubenswrapper[4796]: I1202 20:25:36.997413 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-pw5b4" event={"ID":"b7cba022-cc8d-457b-86e9-d97b822f03a4","Type":"ContainerStarted","Data":"1a1f1b5e252ce3f808dfae9f95f29204890841572028c0a835f57aaeee1a585f"} Dec 02 20:25:36 crc kubenswrapper[4796]: I1202 20:25:36.997429 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-pw5b4" event={"ID":"b7cba022-cc8d-457b-86e9-d97b822f03a4","Type":"ContainerStarted","Data":"40bbb60e5f0554cf3529c397c21310d11a51f97dbdff728729a87e7d1a1b2a20"} Dec 02 20:25:36 crc kubenswrapper[4796]: I1202 20:25:36.997565 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-pw5b4" Dec 02 20:25:37 crc kubenswrapper[4796]: I1202 20:25:37.023645 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-pw5b4" podStartSLOduration=2.023623072 podStartE2EDuration="2.023623072s" podCreationTimestamp="2025-12-02 20:25:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:25:37.019311148 +0000 UTC m=+820.022686692" watchObservedRunningTime="2025-12-02 20:25:37.023623072 +0000 UTC m=+820.026998616" Dec 02 20:25:37 crc kubenswrapper[4796]: I1202 20:25:37.370577 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7f44f3c0-eb38-4b85-a27d-94aa92562837-memberlist\") pod \"speaker-7h6dq\" (UID: \"7f44f3c0-eb38-4b85-a27d-94aa92562837\") " pod="metallb-system/speaker-7h6dq" Dec 02 20:25:37 crc kubenswrapper[4796]: I1202 20:25:37.377786 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7f44f3c0-eb38-4b85-a27d-94aa92562837-memberlist\") pod \"speaker-7h6dq\" (UID: \"7f44f3c0-eb38-4b85-a27d-94aa92562837\") " pod="metallb-system/speaker-7h6dq" Dec 02 20:25:37 crc kubenswrapper[4796]: I1202 20:25:37.478619 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-7h6dq" Dec 02 20:25:38 crc kubenswrapper[4796]: I1202 20:25:38.013005 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7h6dq" event={"ID":"7f44f3c0-eb38-4b85-a27d-94aa92562837","Type":"ContainerStarted","Data":"14ec46f34cd91fdb7731295b73ead4dd9e007ad54d7ffae915a68cedbbe5cb8c"} Dec 02 20:25:38 crc kubenswrapper[4796]: I1202 20:25:38.013458 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7h6dq" event={"ID":"7f44f3c0-eb38-4b85-a27d-94aa92562837","Type":"ContainerStarted","Data":"7f03c3e815d18958db791971c827063f150e87c818ceb3521689a01b6330ab87"} Dec 02 20:25:39 crc kubenswrapper[4796]: I1202 20:25:39.043370 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7h6dq" event={"ID":"7f44f3c0-eb38-4b85-a27d-94aa92562837","Type":"ContainerStarted","Data":"2a9320c3db4d5efc4cb20fdb716e50ea470c67dc9b1093f3b5c47b519c3261f5"} Dec 02 20:25:39 crc kubenswrapper[4796]: I1202 20:25:39.044908 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-7h6dq" Dec 02 20:25:39 crc kubenswrapper[4796]: I1202 20:25:39.082027 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-7h6dq" podStartSLOduration=4.081993908 podStartE2EDuration="4.081993908s" podCreationTimestamp="2025-12-02 20:25:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:25:39.066724057 +0000 UTC m=+822.070099591" watchObservedRunningTime="2025-12-02 20:25:39.081993908 +0000 UTC m=+822.085369442" Dec 02 20:25:44 crc kubenswrapper[4796]: I1202 20:25:44.087237 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-lz795" event={"ID":"bbaf8931-75f7-42ae-a13d-69218a478762","Type":"ContainerStarted","Data":"f01317b94b8f0e5971e34efec2b8f7111c2cf211b0d183df7e309aa81da78581"} Dec 02 20:25:44 crc kubenswrapper[4796]: I1202 20:25:44.088201 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-lz795" Dec 02 20:25:44 crc kubenswrapper[4796]: I1202 20:25:44.090085 4796 generic.go:334] "Generic (PLEG): container finished" podID="15e0e87e-e84b-4d87-9e25-224fd500c3a6" containerID="be7ef28dc9b999175e1f89a8aba23726349f39fb9ec8b10ffb4128a8fe6abcd7" exitCode=0 Dec 02 20:25:44 crc kubenswrapper[4796]: I1202 20:25:44.090142 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bgvx8" event={"ID":"15e0e87e-e84b-4d87-9e25-224fd500c3a6","Type":"ContainerDied","Data":"be7ef28dc9b999175e1f89a8aba23726349f39fb9ec8b10ffb4128a8fe6abcd7"} Dec 02 20:25:44 crc kubenswrapper[4796]: I1202 20:25:44.110914 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-lz795" podStartSLOduration=1.388056955 podStartE2EDuration="9.110884434s" podCreationTimestamp="2025-12-02 20:25:35 +0000 UTC" firstStartedPulling="2025-12-02 20:25:36.10377907 +0000 UTC m=+819.107154604" lastFinishedPulling="2025-12-02 20:25:43.826606539 +0000 UTC m=+826.829982083" observedRunningTime="2025-12-02 20:25:44.103081524 +0000 UTC m=+827.106457068" watchObservedRunningTime="2025-12-02 20:25:44.110884434 +0000 UTC m=+827.114259988" Dec 02 20:25:45 crc kubenswrapper[4796]: I1202 20:25:45.099504 4796 generic.go:334] "Generic (PLEG): container finished" 
podID="15e0e87e-e84b-4d87-9e25-224fd500c3a6" containerID="548363f6dc7682695d0a61be01bc74a194bea6a92c5b8f8d2094edb32bbd19aa" exitCode=0 Dec 02 20:25:45 crc kubenswrapper[4796]: I1202 20:25:45.099585 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bgvx8" event={"ID":"15e0e87e-e84b-4d87-9e25-224fd500c3a6","Type":"ContainerDied","Data":"548363f6dc7682695d0a61be01bc74a194bea6a92c5b8f8d2094edb32bbd19aa"} Dec 02 20:25:46 crc kubenswrapper[4796]: I1202 20:25:46.091310 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-pw5b4" Dec 02 20:25:46 crc kubenswrapper[4796]: I1202 20:25:46.110296 4796 generic.go:334] "Generic (PLEG): container finished" podID="15e0e87e-e84b-4d87-9e25-224fd500c3a6" containerID="fa69d4997de6be1298fd11cae0673bae8d3d2b82a351256305e8b2dbbc9f43eb" exitCode=0 Dec 02 20:25:46 crc kubenswrapper[4796]: I1202 20:25:46.110345 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bgvx8" event={"ID":"15e0e87e-e84b-4d87-9e25-224fd500c3a6","Type":"ContainerDied","Data":"fa69d4997de6be1298fd11cae0673bae8d3d2b82a351256305e8b2dbbc9f43eb"} Dec 02 20:25:47 crc kubenswrapper[4796]: I1202 20:25:47.132666 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bgvx8" event={"ID":"15e0e87e-e84b-4d87-9e25-224fd500c3a6","Type":"ContainerStarted","Data":"b33f50cb6cbcc0f9a722b5962f1e0b6a57bcc2b3f360f18b9b1f60e5de1f2768"} Dec 02 20:25:47 crc kubenswrapper[4796]: I1202 20:25:47.133481 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bgvx8" event={"ID":"15e0e87e-e84b-4d87-9e25-224fd500c3a6","Type":"ContainerStarted","Data":"1ebbc63c9fce3826d69fb0e645a9fba278fe8148a55b63519c23276a9736dd9d"} Dec 02 20:25:47 crc kubenswrapper[4796]: I1202 20:25:47.133508 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bgvx8" event={"ID":"15e0e87e-e84b-4d87-9e25-224fd500c3a6","Type":"ContainerStarted","Data":"5c289827759c3fbe65d7b139384a67573e20bba3af36b855bc8eeb04da41be53"} Dec 02 20:25:47 crc kubenswrapper[4796]: I1202 20:25:47.133520 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bgvx8" event={"ID":"15e0e87e-e84b-4d87-9e25-224fd500c3a6","Type":"ContainerStarted","Data":"8b287d83fbe744500b911398c1c851b2aaf1dc715c164184d57f6d3fc78efec1"} Dec 02 20:25:47 crc kubenswrapper[4796]: I1202 20:25:47.133531 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bgvx8" event={"ID":"15e0e87e-e84b-4d87-9e25-224fd500c3a6","Type":"ContainerStarted","Data":"7de71da519e68c21c99a8d667f3795d81bceb458bf300236db87351a5a2edcd5"} Dec 02 20:25:47 crc kubenswrapper[4796]: I1202 20:25:47.482092 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-7h6dq" Dec 02 20:25:48 crc kubenswrapper[4796]: I1202 20:25:48.147511 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bgvx8" event={"ID":"15e0e87e-e84b-4d87-9e25-224fd500c3a6","Type":"ContainerStarted","Data":"918cac0824a59be3a809136adfd894d5b8c12994338d827d662d8854b5ac40d9"} Dec 02 20:25:48 crc kubenswrapper[4796]: I1202 20:25:48.147850 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-bgvx8" Dec 02 20:25:48 crc kubenswrapper[4796]: I1202 20:25:48.189686 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-bgvx8" podStartSLOduration=6.084865085 
podStartE2EDuration="13.189657673s" podCreationTimestamp="2025-12-02 20:25:35 +0000 UTC" firstStartedPulling="2025-12-02 20:25:36.701757234 +0000 UTC m=+819.705132778" lastFinishedPulling="2025-12-02 20:25:43.806549822 +0000 UTC m=+826.809925366" observedRunningTime="2025-12-02 20:25:48.182796616 +0000 UTC m=+831.186172170" watchObservedRunningTime="2025-12-02 20:25:48.189657673 +0000 UTC m=+831.193033247" Dec 02 20:25:49 crc kubenswrapper[4796]: I1202 20:25:49.357001 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzrrh"] Dec 02 20:25:49 crc kubenswrapper[4796]: I1202 20:25:49.358999 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzrrh" Dec 02 20:25:49 crc kubenswrapper[4796]: I1202 20:25:49.361014 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 02 20:25:49 crc kubenswrapper[4796]: I1202 20:25:49.375217 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzrrh"] Dec 02 20:25:49 crc kubenswrapper[4796]: I1202 20:25:49.493100 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7gxf\" (UniqueName: \"kubernetes.io/projected/41224180-82e9-44bf-960c-c1a21df6f98e-kube-api-access-c7gxf\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzrrh\" (UID: \"41224180-82e9-44bf-960c-c1a21df6f98e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzrrh" Dec 02 20:25:49 crc kubenswrapper[4796]: I1202 20:25:49.493202 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41224180-82e9-44bf-960c-c1a21df6f98e-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzrrh\" (UID: \"41224180-82e9-44bf-960c-c1a21df6f98e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzrrh" Dec 02 20:25:49 crc kubenswrapper[4796]: I1202 20:25:49.493278 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41224180-82e9-44bf-960c-c1a21df6f98e-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzrrh\" (UID: \"41224180-82e9-44bf-960c-c1a21df6f98e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzrrh" Dec 02 20:25:49 crc kubenswrapper[4796]: I1202 20:25:49.595044 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41224180-82e9-44bf-960c-c1a21df6f98e-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzrrh\" (UID: \"41224180-82e9-44bf-960c-c1a21df6f98e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzrrh" Dec 02 20:25:49 crc kubenswrapper[4796]: I1202 20:25:49.595193 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41224180-82e9-44bf-960c-c1a21df6f98e-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzrrh\" (UID: \"41224180-82e9-44bf-960c-c1a21df6f98e\") " 
pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzrrh" Dec 02 20:25:49 crc kubenswrapper[4796]: I1202 20:25:49.595349 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7gxf\" (UniqueName: \"kubernetes.io/projected/41224180-82e9-44bf-960c-c1a21df6f98e-kube-api-access-c7gxf\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzrrh\" (UID: \"41224180-82e9-44bf-960c-c1a21df6f98e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzrrh" Dec 02 20:25:49 crc kubenswrapper[4796]: I1202 20:25:49.595630 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41224180-82e9-44bf-960c-c1a21df6f98e-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzrrh\" (UID: \"41224180-82e9-44bf-960c-c1a21df6f98e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzrrh" Dec 02 20:25:49 crc kubenswrapper[4796]: I1202 20:25:49.595817 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41224180-82e9-44bf-960c-c1a21df6f98e-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzrrh\" (UID: \"41224180-82e9-44bf-960c-c1a21df6f98e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzrrh" Dec 02 20:25:49 crc kubenswrapper[4796]: I1202 20:25:49.620235 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7gxf\" (UniqueName: \"kubernetes.io/projected/41224180-82e9-44bf-960c-c1a21df6f98e-kube-api-access-c7gxf\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzrrh\" (UID: \"41224180-82e9-44bf-960c-c1a21df6f98e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzrrh" Dec 02 20:25:49 crc kubenswrapper[4796]: I1202 20:25:49.677776 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzrrh" Dec 02 20:25:50 crc kubenswrapper[4796]: I1202 20:25:50.066547 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzrrh"] Dec 02 20:25:50 crc kubenswrapper[4796]: I1202 20:25:50.162157 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzrrh" event={"ID":"41224180-82e9-44bf-960c-c1a21df6f98e","Type":"ContainerStarted","Data":"fb17f77087306614d6e79faccc39c19a1f33ba3cb35cd1d1bc38554a3dee4606"} Dec 02 20:25:51 crc kubenswrapper[4796]: I1202 20:25:51.174615 4796 generic.go:334] "Generic (PLEG): container finished" podID="41224180-82e9-44bf-960c-c1a21df6f98e" containerID="6799aba4cf60b15610087e6149aaa5adbc190f0530aebdfbe5bdf06b7fecf449" exitCode=0 Dec 02 20:25:51 crc kubenswrapper[4796]: I1202 20:25:51.174741 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzrrh" event={"ID":"41224180-82e9-44bf-960c-c1a21df6f98e","Type":"ContainerDied","Data":"6799aba4cf60b15610087e6149aaa5adbc190f0530aebdfbe5bdf06b7fecf449"} Dec 02 20:25:51 crc kubenswrapper[4796]: I1202 20:25:51.425673 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-bgvx8" Dec 02 20:25:51 crc kubenswrapper[4796]: I1202 20:25:51.478303 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-bgvx8" Dec 02 20:25:55 crc kubenswrapper[4796]: I1202 20:25:55.214880 4796 generic.go:334] "Generic (PLEG): container finished" podID="41224180-82e9-44bf-960c-c1a21df6f98e" containerID="789fca101ba9a8b0bd32ce1ca7cb7cbbd17e3a4985b4461461f001ad4acda87d" exitCode=0 Dec 02 20:25:55 crc kubenswrapper[4796]: I1202 20:25:55.214983 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzrrh" event={"ID":"41224180-82e9-44bf-960c-c1a21df6f98e","Type":"ContainerDied","Data":"789fca101ba9a8b0bd32ce1ca7cb7cbbd17e3a4985b4461461f001ad4acda87d"} Dec 02 20:25:55 crc kubenswrapper[4796]: I1202 20:25:55.845472 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-lz795" Dec 02 20:25:56 crc kubenswrapper[4796]: I1202 20:25:56.228849 4796 generic.go:334] "Generic (PLEG): container finished" podID="41224180-82e9-44bf-960c-c1a21df6f98e" containerID="e9b2342044397c22f07ba7e508b1c06f3d4f1e66bd828d0dabdb4541fbcc3fc4" exitCode=0 Dec 02 20:25:56 crc kubenswrapper[4796]: I1202 20:25:56.228951 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzrrh" event={"ID":"41224180-82e9-44bf-960c-c1a21df6f98e","Type":"ContainerDied","Data":"e9b2342044397c22f07ba7e508b1c06f3d4f1e66bd828d0dabdb4541fbcc3fc4"} Dec 02 20:25:56 crc kubenswrapper[4796]: I1202 20:25:56.429278 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-bgvx8" Dec 02 20:25:57 crc kubenswrapper[4796]: I1202 20:25:57.536977 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzrrh" Dec 02 20:25:57 crc kubenswrapper[4796]: I1202 20:25:57.627303 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41224180-82e9-44bf-960c-c1a21df6f98e-bundle\") pod \"41224180-82e9-44bf-960c-c1a21df6f98e\" (UID: \"41224180-82e9-44bf-960c-c1a21df6f98e\") " Dec 02 20:25:57 crc kubenswrapper[4796]: I1202 20:25:57.627385 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7gxf\" (UniqueName: \"kubernetes.io/projected/41224180-82e9-44bf-960c-c1a21df6f98e-kube-api-access-c7gxf\") pod \"41224180-82e9-44bf-960c-c1a21df6f98e\" (UID: \"41224180-82e9-44bf-960c-c1a21df6f98e\") " Dec 02 20:25:57 crc kubenswrapper[4796]: I1202 20:25:57.627471 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41224180-82e9-44bf-960c-c1a21df6f98e-util\") pod \"41224180-82e9-44bf-960c-c1a21df6f98e\" (UID: \"41224180-82e9-44bf-960c-c1a21df6f98e\") " Dec 02 20:25:57 crc kubenswrapper[4796]: I1202 20:25:57.629309 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41224180-82e9-44bf-960c-c1a21df6f98e-bundle" (OuterVolumeSpecName: "bundle") pod "41224180-82e9-44bf-960c-c1a21df6f98e" (UID: "41224180-82e9-44bf-960c-c1a21df6f98e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:25:57 crc kubenswrapper[4796]: I1202 20:25:57.633866 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41224180-82e9-44bf-960c-c1a21df6f98e-kube-api-access-c7gxf" (OuterVolumeSpecName: "kube-api-access-c7gxf") pod "41224180-82e9-44bf-960c-c1a21df6f98e" (UID: "41224180-82e9-44bf-960c-c1a21df6f98e"). InnerVolumeSpecName "kube-api-access-c7gxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:25:57 crc kubenswrapper[4796]: I1202 20:25:57.637239 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41224180-82e9-44bf-960c-c1a21df6f98e-util" (OuterVolumeSpecName: "util") pod "41224180-82e9-44bf-960c-c1a21df6f98e" (UID: "41224180-82e9-44bf-960c-c1a21df6f98e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:25:57 crc kubenswrapper[4796]: I1202 20:25:57.728574 4796 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41224180-82e9-44bf-960c-c1a21df6f98e-util\") on node \"crc\" DevicePath \"\"" Dec 02 20:25:57 crc kubenswrapper[4796]: I1202 20:25:57.728621 4796 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41224180-82e9-44bf-960c-c1a21df6f98e-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:25:57 crc kubenswrapper[4796]: I1202 20:25:57.728637 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7gxf\" (UniqueName: \"kubernetes.io/projected/41224180-82e9-44bf-960c-c1a21df6f98e-kube-api-access-c7gxf\") on node \"crc\" DevicePath \"\"" Dec 02 20:25:58 crc kubenswrapper[4796]: I1202 20:25:58.248390 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzrrh" event={"ID":"41224180-82e9-44bf-960c-c1a21df6f98e","Type":"ContainerDied","Data":"fb17f77087306614d6e79faccc39c19a1f33ba3cb35cd1d1bc38554a3dee4606"} Dec 02 20:25:58 crc kubenswrapper[4796]: I1202 20:25:58.248481 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb17f77087306614d6e79faccc39c19a1f33ba3cb35cd1d1bc38554a3dee4606" Dec 02 20:25:58 crc kubenswrapper[4796]: I1202 20:25:58.248514 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzrrh" Dec 02 20:26:02 crc kubenswrapper[4796]: I1202 20:26:02.766686 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-zrmf4"] Dec 02 20:26:02 crc kubenswrapper[4796]: E1202 20:26:02.767733 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41224180-82e9-44bf-960c-c1a21df6f98e" containerName="util" Dec 02 20:26:02 crc kubenswrapper[4796]: I1202 20:26:02.767747 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="41224180-82e9-44bf-960c-c1a21df6f98e" containerName="util" Dec 02 20:26:02 crc kubenswrapper[4796]: E1202 20:26:02.767769 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41224180-82e9-44bf-960c-c1a21df6f98e" containerName="pull" Dec 02 20:26:02 crc kubenswrapper[4796]: I1202 20:26:02.767775 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="41224180-82e9-44bf-960c-c1a21df6f98e" containerName="pull" Dec 02 20:26:02 crc kubenswrapper[4796]: E1202 20:26:02.767790 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41224180-82e9-44bf-960c-c1a21df6f98e" containerName="extract" Dec 02 20:26:02 crc kubenswrapper[4796]: I1202 20:26:02.767796 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="41224180-82e9-44bf-960c-c1a21df6f98e" containerName="extract" Dec 02 20:26:02 crc kubenswrapper[4796]: I1202 20:26:02.767936 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="41224180-82e9-44bf-960c-c1a21df6f98e" containerName="extract" Dec 02 20:26:02 crc kubenswrapper[4796]: I1202 20:26:02.768492 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-zrmf4" Dec 02 20:26:02 crc kubenswrapper[4796]: I1202 20:26:02.776339 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Dec 02 20:26:02 crc kubenswrapper[4796]: I1202 20:26:02.776731 4796 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-w65vp" Dec 02 20:26:02 crc kubenswrapper[4796]: I1202 20:26:02.776766 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Dec 02 20:26:02 crc kubenswrapper[4796]: I1202 20:26:02.800874 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-zrmf4"] Dec 02 20:26:02 crc kubenswrapper[4796]: I1202 20:26:02.801505 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/885cd18b-0248-40f1-86a6-d02ce409424c-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-zrmf4\" (UID: \"885cd18b-0248-40f1-86a6-d02ce409424c\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-zrmf4" Dec 02 20:26:02 crc kubenswrapper[4796]: I1202 20:26:02.801562 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vn9b\" (UniqueName: \"kubernetes.io/projected/885cd18b-0248-40f1-86a6-d02ce409424c-kube-api-access-9vn9b\") pod \"cert-manager-operator-controller-manager-64cf6dff88-zrmf4\" (UID: \"885cd18b-0248-40f1-86a6-d02ce409424c\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-zrmf4" Dec 02 20:26:02 crc kubenswrapper[4796]: I1202 20:26:02.902454 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vn9b\" (UniqueName: \"kubernetes.io/projected/885cd18b-0248-40f1-86a6-d02ce409424c-kube-api-access-9vn9b\") pod \"cert-manager-operator-controller-manager-64cf6dff88-zrmf4\" (UID: \"885cd18b-0248-40f1-86a6-d02ce409424c\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-zrmf4" Dec 02 20:26:02 crc kubenswrapper[4796]: I1202 20:26:02.902569 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/885cd18b-0248-40f1-86a6-d02ce409424c-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-zrmf4\" (UID: \"885cd18b-0248-40f1-86a6-d02ce409424c\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-zrmf4" Dec 02 20:26:02 crc kubenswrapper[4796]: I1202 20:26:02.903051 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/885cd18b-0248-40f1-86a6-d02ce409424c-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-zrmf4\" (UID: \"885cd18b-0248-40f1-86a6-d02ce409424c\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-zrmf4" Dec 02 20:26:02 crc kubenswrapper[4796]: I1202 20:26:02.928488 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vn9b\" (UniqueName: \"kubernetes.io/projected/885cd18b-0248-40f1-86a6-d02ce409424c-kube-api-access-9vn9b\") pod \"cert-manager-operator-controller-manager-64cf6dff88-zrmf4\" (UID: \"885cd18b-0248-40f1-86a6-d02ce409424c\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-zrmf4" Dec 02 20:26:03 crc kubenswrapper[4796]: I1202 20:26:03.085165 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-zrmf4" Dec 02 20:26:03 crc kubenswrapper[4796]: I1202 20:26:03.321534 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-zrmf4"] Dec 02 20:26:03 crc kubenswrapper[4796]: W1202 20:26:03.331090 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod885cd18b_0248_40f1_86a6_d02ce409424c.slice/crio-b0c6b5b1221bd6dd10edc65b2386dd10d4e7394c0f9802c6695079827a229df3 WatchSource:0}: Error finding container b0c6b5b1221bd6dd10edc65b2386dd10d4e7394c0f9802c6695079827a229df3: Status 404 returned error can't find the container with id b0c6b5b1221bd6dd10edc65b2386dd10d4e7394c0f9802c6695079827a229df3 Dec 02 20:26:04 crc kubenswrapper[4796]: I1202 20:26:04.291095 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-zrmf4" event={"ID":"885cd18b-0248-40f1-86a6-d02ce409424c","Type":"ContainerStarted","Data":"b0c6b5b1221bd6dd10edc65b2386dd10d4e7394c0f9802c6695079827a229df3"} Dec 02 20:26:08 crc kubenswrapper[4796]: I1202 20:26:08.325058 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-zrmf4" event={"ID":"885cd18b-0248-40f1-86a6-d02ce409424c","Type":"ContainerStarted","Data":"47fae08b89bce198a1b45e753abec15c73998e94e129b481cf9fb138500e7bba"} Dec 02 20:26:08 crc kubenswrapper[4796]: I1202 20:26:08.379122 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-zrmf4" podStartSLOduration=2.300765964 podStartE2EDuration="6.379099588s" podCreationTimestamp="2025-12-02 20:26:02 +0000 UTC" firstStartedPulling="2025-12-02 20:26:03.33517036 +0000 UTC m=+846.338545894" lastFinishedPulling="2025-12-02 20:26:07.413503984 +0000 UTC m=+850.416879518" observedRunningTime="2025-12-02 20:26:08.374606298 +0000 UTC m=+851.377981832" watchObservedRunningTime="2025-12-02 20:26:08.379099588 +0000 UTC m=+851.382475112" Dec 02 20:26:10 crc kubenswrapper[4796]: I1202 20:26:10.072085 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-p48zl"] Dec 02 20:26:10 crc kubenswrapper[4796]: I1202 20:26:10.099187 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-p48zl"] Dec 02 20:26:10 crc kubenswrapper[4796]: I1202 20:26:10.099410 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-p48zl" Dec 02 20:26:10 crc kubenswrapper[4796]: I1202 20:26:10.103066 4796 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-vr6w6" Dec 02 20:26:10 crc kubenswrapper[4796]: I1202 20:26:10.103573 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 02 20:26:10 crc kubenswrapper[4796]: I1202 20:26:10.103824 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 02 20:26:10 crc kubenswrapper[4796]: I1202 20:26:10.121726 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cb8821bc-4ada-47cf-88c5-210a84203e01-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-p48zl\" (UID: \"cb8821bc-4ada-47cf-88c5-210a84203e01\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-p48zl" Dec 02 20:26:10 crc kubenswrapper[4796]: I1202 20:26:10.121786 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpldw\" (UniqueName: \"kubernetes.io/projected/cb8821bc-4ada-47cf-88c5-210a84203e01-kube-api-access-gpldw\") pod \"cert-manager-webhook-f4fb5df64-p48zl\" (UID: \"cb8821bc-4ada-47cf-88c5-210a84203e01\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-p48zl" Dec 02 20:26:10 crc kubenswrapper[4796]: I1202 20:26:10.223697 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cb8821bc-4ada-47cf-88c5-210a84203e01-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-p48zl\" (UID: \"cb8821bc-4ada-47cf-88c5-210a84203e01\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-p48zl" Dec 02 20:26:10 crc kubenswrapper[4796]: I1202 20:26:10.223763 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpldw\" (UniqueName: \"kubernetes.io/projected/cb8821bc-4ada-47cf-88c5-210a84203e01-kube-api-access-gpldw\") pod \"cert-manager-webhook-f4fb5df64-p48zl\" (UID: \"cb8821bc-4ada-47cf-88c5-210a84203e01\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-p48zl" Dec 02 20:26:10 crc kubenswrapper[4796]: I1202 20:26:10.245767 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpldw\" (UniqueName: \"kubernetes.io/projected/cb8821bc-4ada-47cf-88c5-210a84203e01-kube-api-access-gpldw\") pod \"cert-manager-webhook-f4fb5df64-p48zl\" (UID: \"cb8821bc-4ada-47cf-88c5-210a84203e01\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-p48zl" Dec 02 20:26:10 crc kubenswrapper[4796]: I1202 20:26:10.248734 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cb8821bc-4ada-47cf-88c5-210a84203e01-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-p48zl\" (UID: \"cb8821bc-4ada-47cf-88c5-210a84203e01\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-p48zl" Dec 02 20:26:10 crc kubenswrapper[4796]: I1202 20:26:10.431512 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-p48zl" Dec 02 20:26:10 crc kubenswrapper[4796]: I1202 20:26:10.939671 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-p48zl"] Dec 02 20:26:11 crc kubenswrapper[4796]: I1202 20:26:11.346703 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-p48zl" event={"ID":"cb8821bc-4ada-47cf-88c5-210a84203e01","Type":"ContainerStarted","Data":"55ecf15887bb3915f3b238acefaa136e0c9944cf30d401f3289a610cb8696702"} Dec 02 20:26:12 crc kubenswrapper[4796]: I1202 20:26:12.585437 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-fwgmt"] Dec 02 20:26:12 crc kubenswrapper[4796]: I1202 20:26:12.586316 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-fwgmt" Dec 02 20:26:12 crc kubenswrapper[4796]: I1202 20:26:12.590175 4796 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-wbjlb" Dec 02 20:26:12 crc kubenswrapper[4796]: I1202 20:26:12.633059 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-fwgmt"] Dec 02 20:26:12 crc kubenswrapper[4796]: I1202 20:26:12.664376 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qnl2\" (UniqueName: \"kubernetes.io/projected/e17129f0-a2dc-48f9-94e9-08d6cc73319d-kube-api-access-6qnl2\") pod \"cert-manager-cainjector-855d9ccff4-fwgmt\" (UID: \"e17129f0-a2dc-48f9-94e9-08d6cc73319d\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-fwgmt" Dec 02 20:26:12 crc kubenswrapper[4796]: I1202 20:26:12.664472 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e17129f0-a2dc-48f9-94e9-08d6cc73319d-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-fwgmt\" (UID: \"e17129f0-a2dc-48f9-94e9-08d6cc73319d\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-fwgmt" Dec 02 20:26:12 crc kubenswrapper[4796]: I1202 20:26:12.766270 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qnl2\" (UniqueName: \"kubernetes.io/projected/e17129f0-a2dc-48f9-94e9-08d6cc73319d-kube-api-access-6qnl2\") pod \"cert-manager-cainjector-855d9ccff4-fwgmt\" (UID: \"e17129f0-a2dc-48f9-94e9-08d6cc73319d\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-fwgmt" Dec 02 20:26:12 crc kubenswrapper[4796]: I1202 20:26:12.766593 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e17129f0-a2dc-48f9-94e9-08d6cc73319d-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-fwgmt\" (UID: \"e17129f0-a2dc-48f9-94e9-08d6cc73319d\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-fwgmt" Dec 02 20:26:12 crc kubenswrapper[4796]: I1202 20:26:12.791593 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qnl2\" (UniqueName: \"kubernetes.io/projected/e17129f0-a2dc-48f9-94e9-08d6cc73319d-kube-api-access-6qnl2\") pod \"cert-manager-cainjector-855d9ccff4-fwgmt\" (UID: \"e17129f0-a2dc-48f9-94e9-08d6cc73319d\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-fwgmt" Dec 02 20:26:12 crc kubenswrapper[4796]: I1202 20:26:12.792650 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e17129f0-a2dc-48f9-94e9-08d6cc73319d-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-fwgmt\" (UID: \"e17129f0-a2dc-48f9-94e9-08d6cc73319d\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-fwgmt" Dec 02 20:26:12 crc kubenswrapper[4796]: I1202 20:26:12.905706 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-fwgmt" Dec 02 20:26:13 crc kubenswrapper[4796]: I1202 20:26:13.444938 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-fwgmt"] Dec 02 20:26:13 crc kubenswrapper[4796]: W1202 20:26:13.455984 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode17129f0_a2dc_48f9_94e9_08d6cc73319d.slice/crio-ae22369b72e8c3cbeb9e4261298bf9aa8613ed6d0ab396b3ae89e7cf76e5167d WatchSource:0}: Error finding container ae22369b72e8c3cbeb9e4261298bf9aa8613ed6d0ab396b3ae89e7cf76e5167d: Status 404 returned error can't find the container with id ae22369b72e8c3cbeb9e4261298bf9aa8613ed6d0ab396b3ae89e7cf76e5167d Dec 02 20:26:14 crc kubenswrapper[4796]: I1202 20:26:14.392887 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-fwgmt" event={"ID":"e17129f0-a2dc-48f9-94e9-08d6cc73319d","Type":"ContainerStarted","Data":"ae22369b72e8c3cbeb9e4261298bf9aa8613ed6d0ab396b3ae89e7cf76e5167d"} Dec 02 20:26:20 crc kubenswrapper[4796]: I1202 20:26:20.156607 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-4ct7l"] Dec 02 20:26:20 crc kubenswrapper[4796]: I1202 20:26:20.158831 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-4ct7l" Dec 02 20:26:20 crc kubenswrapper[4796]: I1202 20:26:20.161393 4796 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-tmb9m" Dec 02 20:26:20 crc kubenswrapper[4796]: I1202 20:26:20.166549 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-4ct7l"] Dec 02 20:26:20 crc kubenswrapper[4796]: I1202 20:26:20.222343 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/25f9ec51-75dd-40f8-b598-cfae95b84574-bound-sa-token\") pod \"cert-manager-86cb77c54b-4ct7l\" (UID: \"25f9ec51-75dd-40f8-b598-cfae95b84574\") " pod="cert-manager/cert-manager-86cb77c54b-4ct7l" Dec 02 20:26:20 crc kubenswrapper[4796]: I1202 20:26:20.222403 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chljz\" (UniqueName: \"kubernetes.io/projected/25f9ec51-75dd-40f8-b598-cfae95b84574-kube-api-access-chljz\") pod \"cert-manager-86cb77c54b-4ct7l\" (UID: \"25f9ec51-75dd-40f8-b598-cfae95b84574\") " pod="cert-manager/cert-manager-86cb77c54b-4ct7l" Dec 02 20:26:20 crc kubenswrapper[4796]: I1202 20:26:20.324075 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/25f9ec51-75dd-40f8-b598-cfae95b84574-bound-sa-token\") pod \"cert-manager-86cb77c54b-4ct7l\" (UID: \"25f9ec51-75dd-40f8-b598-cfae95b84574\") " pod="cert-manager/cert-manager-86cb77c54b-4ct7l" Dec 02 20:26:20 crc kubenswrapper[4796]: I1202 20:26:20.324121 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chljz\" (UniqueName: \"kubernetes.io/projected/25f9ec51-75dd-40f8-b598-cfae95b84574-kube-api-access-chljz\") pod \"cert-manager-86cb77c54b-4ct7l\" (UID: \"25f9ec51-75dd-40f8-b598-cfae95b84574\") " pod="cert-manager/cert-manager-86cb77c54b-4ct7l" Dec 02 20:26:20 crc kubenswrapper[4796]: I1202 20:26:20.346944 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/25f9ec51-75dd-40f8-b598-cfae95b84574-bound-sa-token\") pod \"cert-manager-86cb77c54b-4ct7l\" (UID: \"25f9ec51-75dd-40f8-b598-cfae95b84574\") " pod="cert-manager/cert-manager-86cb77c54b-4ct7l" Dec 02 20:26:20 crc kubenswrapper[4796]: I1202 20:26:20.350619 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chljz\" (UniqueName: \"kubernetes.io/projected/25f9ec51-75dd-40f8-b598-cfae95b84574-kube-api-access-chljz\") pod \"cert-manager-86cb77c54b-4ct7l\" (UID: \"25f9ec51-75dd-40f8-b598-cfae95b84574\") " pod="cert-manager/cert-manager-86cb77c54b-4ct7l" Dec 02 20:26:20 crc kubenswrapper[4796]: I1202 20:26:20.482909 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-4ct7l" Dec 02 20:26:21 crc kubenswrapper[4796]: I1202 20:26:21.003030 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-4ct7l"] Dec 02 20:26:21 crc kubenswrapper[4796]: I1202 20:26:21.461163 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-fwgmt" event={"ID":"e17129f0-a2dc-48f9-94e9-08d6cc73319d","Type":"ContainerStarted","Data":"6615086c0061fb4f0f68ecc018574ae1357223896c48318526efb08e223510c4"} Dec 02 20:26:21 crc kubenswrapper[4796]: I1202 20:26:21.463843 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-4ct7l" event={"ID":"25f9ec51-75dd-40f8-b598-cfae95b84574","Type":"ContainerStarted","Data":"4239ba09676363343f70be7bbe70c0a9209b25e2e03c11db3bbcd23bdbaa65fb"} Dec 02 20:26:21 crc kubenswrapper[4796]: I1202 20:26:21.463908 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-4ct7l" event={"ID":"25f9ec51-75dd-40f8-b598-cfae95b84574","Type":"ContainerStarted","Data":"5b2bc2fff5ce03f439d8e141ee6788d846bbff97fd855c60f7de0e0f8c0cac1b"} Dec 02 20:26:21 crc kubenswrapper[4796]: I1202 20:26:21.466520 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-p48zl" event={"ID":"cb8821bc-4ada-47cf-88c5-210a84203e01","Type":"ContainerStarted","Data":"91ee811575adadf04f3636d059f4e725f765428feb1ed68f4312bfbca104c9a3"} Dec 02 20:26:21 crc kubenswrapper[4796]: I1202 20:26:21.466670 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-p48zl" Dec 02 20:26:21 crc kubenswrapper[4796]: I1202 20:26:21.483795 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-fwgmt" podStartSLOduration=2.128256553 podStartE2EDuration="9.483772954s" podCreationTimestamp="2025-12-02 20:26:12 +0000 UTC" firstStartedPulling="2025-12-02 20:26:13.458379649 +0000 UTC m=+856.461755183" lastFinishedPulling="2025-12-02 20:26:20.81389601 +0000 UTC m=+863.817271584" observedRunningTime="2025-12-02 20:26:21.479093452 +0000 UTC m=+864.482468986" watchObservedRunningTime="2025-12-02 20:26:21.483772954 +0000 UTC m=+864.487148488" Dec 02 20:26:21 crc kubenswrapper[4796]: I1202 20:26:21.531479 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-p48zl" podStartSLOduration=1.694540739 podStartE2EDuration="11.531459637s" podCreationTimestamp="2025-12-02 20:26:10 +0000 UTC" firstStartedPulling="2025-12-02 20:26:10.955295048 +0000 UTC m=+853.958670572" lastFinishedPulling="2025-12-02 20:26:20.792213936 +0000 UTC m=+863.795589470" observedRunningTime="2025-12-02 20:26:21.515324938 +0000 UTC m=+864.518700482" watchObservedRunningTime="2025-12-02 20:26:21.531459637 +0000 UTC m=+864.534835171" Dec 02 20:26:21 crc kubenswrapper[4796]: I1202 20:26:21.533108 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-4ct7l" podStartSLOduration=1.533103227 podStartE2EDuration="1.533103227s" podCreationTimestamp="2025-12-02 20:26:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:26:21.52948865 +0000 UTC m=+864.532864194" watchObservedRunningTime="2025-12-02 20:26:21.533103227 +0000 UTC 
m=+864.536478761" Dec 02 20:26:25 crc kubenswrapper[4796]: I1202 20:26:25.435004 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-p48zl" Dec 02 20:26:28 crc kubenswrapper[4796]: I1202 20:26:28.973570 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-k5r79"] Dec 02 20:26:28 crc kubenswrapper[4796]: I1202 20:26:28.975000 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-k5r79" Dec 02 20:26:28 crc kubenswrapper[4796]: I1202 20:26:28.980783 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Dec 02 20:26:28 crc kubenswrapper[4796]: I1202 20:26:28.981214 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Dec 02 20:26:28 crc kubenswrapper[4796]: I1202 20:26:28.981421 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-kj9wv" Dec 02 20:26:28 crc kubenswrapper[4796]: I1202 20:26:28.993919 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-k5r79"] Dec 02 20:26:29 crc kubenswrapper[4796]: I1202 20:26:29.092503 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94j9h\" (UniqueName: \"kubernetes.io/projected/6f9d08d6-21b0-4172-a531-7e3d981ab670-kube-api-access-94j9h\") pod \"openstack-operator-index-k5r79\" (UID: \"6f9d08d6-21b0-4172-a531-7e3d981ab670\") " pod="openstack-operators/openstack-operator-index-k5r79" Dec 02 20:26:29 crc kubenswrapper[4796]: I1202 20:26:29.193383 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94j9h\" (UniqueName: \"kubernetes.io/projected/6f9d08d6-21b0-4172-a531-7e3d981ab670-kube-api-access-94j9h\") pod \"openstack-operator-index-k5r79\" (UID: \"6f9d08d6-21b0-4172-a531-7e3d981ab670\") " pod="openstack-operators/openstack-operator-index-k5r79" Dec 02 20:26:29 crc kubenswrapper[4796]: I1202 20:26:29.229833 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94j9h\" (UniqueName: \"kubernetes.io/projected/6f9d08d6-21b0-4172-a531-7e3d981ab670-kube-api-access-94j9h\") pod \"openstack-operator-index-k5r79\" (UID: \"6f9d08d6-21b0-4172-a531-7e3d981ab670\") " pod="openstack-operators/openstack-operator-index-k5r79" Dec 02 20:26:29 crc kubenswrapper[4796]: I1202 20:26:29.341594 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-k5r79" Dec 02 20:26:29 crc kubenswrapper[4796]: I1202 20:26:29.778387 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-k5r79"] Dec 02 20:26:29 crc kubenswrapper[4796]: I1202 20:26:29.881490 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-k5r79" event={"ID":"6f9d08d6-21b0-4172-a531-7e3d981ab670","Type":"ContainerStarted","Data":"ef668daf26d8c557a7be67b52192d0d8e51ad9298712b595eff586b92a647158"} Dec 02 20:26:31 crc kubenswrapper[4796]: I1202 20:26:31.304684 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-k5r79"] Dec 02 20:26:31 crc kubenswrapper[4796]: I1202 20:26:31.709840 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-7mnzm"] Dec 02 20:26:31 crc kubenswrapper[4796]: I1202 20:26:31.710669 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-7mnzm" Dec 02 20:26:31 crc kubenswrapper[4796]: I1202 20:26:31.723751 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7mnzm"] Dec 02 20:26:31 crc kubenswrapper[4796]: I1202 20:26:31.853653 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqtzr\" (UniqueName: \"kubernetes.io/projected/212e8079-da95-4bd2-8619-d8a0c239511d-kube-api-access-zqtzr\") pod \"openstack-operator-index-7mnzm\" (UID: \"212e8079-da95-4bd2-8619-d8a0c239511d\") " pod="openstack-operators/openstack-operator-index-7mnzm" Dec 02 20:26:31 crc kubenswrapper[4796]: I1202 20:26:31.955925 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqtzr\" (UniqueName: \"kubernetes.io/projected/212e8079-da95-4bd2-8619-d8a0c239511d-kube-api-access-zqtzr\") pod \"openstack-operator-index-7mnzm\" (UID: \"212e8079-da95-4bd2-8619-d8a0c239511d\") " pod="openstack-operators/openstack-operator-index-7mnzm" Dec 02 20:26:31 crc kubenswrapper[4796]: I1202 20:26:31.979119 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqtzr\" (UniqueName: \"kubernetes.io/projected/212e8079-da95-4bd2-8619-d8a0c239511d-kube-api-access-zqtzr\") pod \"openstack-operator-index-7mnzm\" (UID: \"212e8079-da95-4bd2-8619-d8a0c239511d\") " pod="openstack-operators/openstack-operator-index-7mnzm" Dec 02 20:26:32 crc kubenswrapper[4796]: I1202 20:26:32.040379 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-7mnzm" Dec 02 20:26:32 crc kubenswrapper[4796]: I1202 20:26:32.914643 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-k5r79" event={"ID":"6f9d08d6-21b0-4172-a531-7e3d981ab670","Type":"ContainerStarted","Data":"5c43001bbed64dc9d33d2c9a0e6ce2d5208e928166e9278c5b9e53d1650835d2"} Dec 02 20:26:32 crc kubenswrapper[4796]: I1202 20:26:32.915053 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-k5r79" podUID="6f9d08d6-21b0-4172-a531-7e3d981ab670" containerName="registry-server" containerID="cri-o://5c43001bbed64dc9d33d2c9a0e6ce2d5208e928166e9278c5b9e53d1650835d2" gracePeriod=2 Dec 02 20:26:32 crc kubenswrapper[4796]: I1202 20:26:32.959178 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-k5r79" podStartSLOduration=1.9877451800000001 podStartE2EDuration="4.959159333s" podCreationTimestamp="2025-12-02 20:26:28 +0000 UTC" firstStartedPulling="2025-12-02 20:26:29.788701778 +0000 UTC m=+872.792077312" lastFinishedPulling="2025-12-02 20:26:32.760115931 +0000 UTC m=+875.763491465" observedRunningTime="2025-12-02 20:26:32.931605386 +0000 UTC m=+875.934980920" watchObservedRunningTime="2025-12-02 20:26:32.959159333 +0000 UTC m=+875.962534857" Dec 02 20:26:32 crc kubenswrapper[4796]: I1202 20:26:32.959557 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7mnzm"] Dec 02 20:26:33 crc kubenswrapper[4796]: W1202 20:26:33.013355 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod212e8079_da95_4bd2_8619_d8a0c239511d.slice/crio-e09acd26bbd35be44b9ca062df5954e195f0af6e0855dcfc45591a7dbf7f5260 WatchSource:0}: Error finding container e09acd26bbd35be44b9ca062df5954e195f0af6e0855dcfc45591a7dbf7f5260: Status 404 returned error can't find the container with id e09acd26bbd35be44b9ca062df5954e195f0af6e0855dcfc45591a7dbf7f5260 Dec 02 20:26:33 crc kubenswrapper[4796]: I1202 20:26:33.317556 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-k5r79_6f9d08d6-21b0-4172-a531-7e3d981ab670/registry-server/0.log" Dec 02 20:26:33 crc kubenswrapper[4796]: I1202 20:26:33.318140 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-k5r79" Dec 02 20:26:33 crc kubenswrapper[4796]: I1202 20:26:33.486509 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94j9h\" (UniqueName: \"kubernetes.io/projected/6f9d08d6-21b0-4172-a531-7e3d981ab670-kube-api-access-94j9h\") pod \"6f9d08d6-21b0-4172-a531-7e3d981ab670\" (UID: \"6f9d08d6-21b0-4172-a531-7e3d981ab670\") " Dec 02 20:26:33 crc kubenswrapper[4796]: I1202 20:26:33.496864 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f9d08d6-21b0-4172-a531-7e3d981ab670-kube-api-access-94j9h" (OuterVolumeSpecName: "kube-api-access-94j9h") pod "6f9d08d6-21b0-4172-a531-7e3d981ab670" (UID: "6f9d08d6-21b0-4172-a531-7e3d981ab670"). InnerVolumeSpecName "kube-api-access-94j9h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:26:33 crc kubenswrapper[4796]: I1202 20:26:33.588186 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94j9h\" (UniqueName: \"kubernetes.io/projected/6f9d08d6-21b0-4172-a531-7e3d981ab670-kube-api-access-94j9h\") on node \"crc\" DevicePath \"\"" Dec 02 20:26:33 crc kubenswrapper[4796]: I1202 20:26:33.929124 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-k5r79_6f9d08d6-21b0-4172-a531-7e3d981ab670/registry-server/0.log" Dec 02 20:26:33 crc kubenswrapper[4796]: I1202 20:26:33.929219 4796 generic.go:334] "Generic (PLEG): container finished" podID="6f9d08d6-21b0-4172-a531-7e3d981ab670" containerID="5c43001bbed64dc9d33d2c9a0e6ce2d5208e928166e9278c5b9e53d1650835d2" exitCode=2 Dec 02 20:26:33 crc kubenswrapper[4796]: I1202 20:26:33.929366 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-k5r79" event={"ID":"6f9d08d6-21b0-4172-a531-7e3d981ab670","Type":"ContainerDied","Data":"5c43001bbed64dc9d33d2c9a0e6ce2d5208e928166e9278c5b9e53d1650835d2"} Dec 02 20:26:33 crc kubenswrapper[4796]: I1202 20:26:33.929377 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-k5r79" Dec 02 20:26:33 crc kubenswrapper[4796]: I1202 20:26:33.929411 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-k5r79" event={"ID":"6f9d08d6-21b0-4172-a531-7e3d981ab670","Type":"ContainerDied","Data":"ef668daf26d8c557a7be67b52192d0d8e51ad9298712b595eff586b92a647158"} Dec 02 20:26:33 crc kubenswrapper[4796]: I1202 20:26:33.929466 4796 scope.go:117] "RemoveContainer" containerID="5c43001bbed64dc9d33d2c9a0e6ce2d5208e928166e9278c5b9e53d1650835d2" Dec 02 20:26:33 crc kubenswrapper[4796]: I1202 20:26:33.932937 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7mnzm" event={"ID":"212e8079-da95-4bd2-8619-d8a0c239511d","Type":"ContainerStarted","Data":"2aab34ec2ae90c846cc4a869be6f4b58a25857b3eff8e38984481d2b600de9a2"} Dec 02 20:26:33 crc kubenswrapper[4796]: I1202 20:26:33.933012 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7mnzm" event={"ID":"212e8079-da95-4bd2-8619-d8a0c239511d","Type":"ContainerStarted","Data":"e09acd26bbd35be44b9ca062df5954e195f0af6e0855dcfc45591a7dbf7f5260"} Dec 02 20:26:33 crc kubenswrapper[4796]: I1202 20:26:33.962465 4796 scope.go:117] "RemoveContainer" containerID="5c43001bbed64dc9d33d2c9a0e6ce2d5208e928166e9278c5b9e53d1650835d2" Dec 02 20:26:33 crc kubenswrapper[4796]: I1202 20:26:33.965080 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-7mnzm" podStartSLOduration=2.906758811 podStartE2EDuration="2.96506216s" podCreationTimestamp="2025-12-02 20:26:31 +0000 UTC" firstStartedPulling="2025-12-02 20:26:33.017722739 +0000 UTC m=+876.021098283" lastFinishedPulling="2025-12-02 20:26:33.076026098 +0000 UTC m=+876.079401632" observedRunningTime="2025-12-02 20:26:33.962925299 +0000 UTC m=+876.966300833" watchObservedRunningTime="2025-12-02 20:26:33.96506216 +0000 UTC m=+876.968437694" Dec 02 20:26:33 crc kubenswrapper[4796]: E1202 20:26:33.965540 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"5c43001bbed64dc9d33d2c9a0e6ce2d5208e928166e9278c5b9e53d1650835d2\": container with ID starting with 5c43001bbed64dc9d33d2c9a0e6ce2d5208e928166e9278c5b9e53d1650835d2 not found: ID does not exist" containerID="5c43001bbed64dc9d33d2c9a0e6ce2d5208e928166e9278c5b9e53d1650835d2" Dec 02 20:26:33 crc kubenswrapper[4796]: I1202 20:26:33.965584 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c43001bbed64dc9d33d2c9a0e6ce2d5208e928166e9278c5b9e53d1650835d2"} err="failed to get container status \"5c43001bbed64dc9d33d2c9a0e6ce2d5208e928166e9278c5b9e53d1650835d2\": rpc error: code = NotFound desc = could not find container \"5c43001bbed64dc9d33d2c9a0e6ce2d5208e928166e9278c5b9e53d1650835d2\": container with ID starting with 5c43001bbed64dc9d33d2c9a0e6ce2d5208e928166e9278c5b9e53d1650835d2 not found: ID does not exist" Dec 02 20:26:34 crc kubenswrapper[4796]: I1202 20:26:34.007374 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-k5r79"] Dec 02 20:26:34 crc kubenswrapper[4796]: I1202 20:26:34.019186 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-k5r79"] Dec 02 20:26:35 crc kubenswrapper[4796]: I1202 20:26:35.285796 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f9d08d6-21b0-4172-a531-7e3d981ab670" path="/var/lib/kubelet/pods/6f9d08d6-21b0-4172-a531-7e3d981ab670/volumes" Dec 02 20:26:42 crc kubenswrapper[4796]: I1202 20:26:42.040982 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-7mnzm" Dec 02 20:26:42 crc kubenswrapper[4796]: I1202 20:26:42.041754 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-7mnzm" Dec 02 20:26:42 crc kubenswrapper[4796]: I1202 20:26:42.092146 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-7mnzm" Dec 02 20:26:43 crc kubenswrapper[4796]: I1202 20:26:43.117412 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-7mnzm" Dec 02 20:26:55 crc kubenswrapper[4796]: I1202 20:26:55.189976 4796 patch_prober.go:28] interesting pod/machine-config-daemon-wzhpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 20:26:55 crc kubenswrapper[4796]: I1202 20:26:55.190710 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 20:26:55 crc kubenswrapper[4796]: I1202 20:26:55.781551 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/10ddc2db95747d078ebd9f771b75baa80e851195a4bcbce3b6c4e62b2ch9jnx"] Dec 02 20:26:55 crc kubenswrapper[4796]: E1202 20:26:55.782012 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f9d08d6-21b0-4172-a531-7e3d981ab670" containerName="registry-server" Dec 02 20:26:55 crc kubenswrapper[4796]: I1202 20:26:55.782042 4796 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6f9d08d6-21b0-4172-a531-7e3d981ab670" containerName="registry-server" Dec 02 20:26:55 crc kubenswrapper[4796]: I1202 20:26:55.782630 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f9d08d6-21b0-4172-a531-7e3d981ab670" containerName="registry-server" Dec 02 20:26:55 crc kubenswrapper[4796]: I1202 20:26:55.784426 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/10ddc2db95747d078ebd9f771b75baa80e851195a4bcbce3b6c4e62b2ch9jnx" Dec 02 20:26:55 crc kubenswrapper[4796]: I1202 20:26:55.787351 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-dp882" Dec 02 20:26:55 crc kubenswrapper[4796]: I1202 20:26:55.793201 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/10ddc2db95747d078ebd9f771b75baa80e851195a4bcbce3b6c4e62b2ch9jnx"] Dec 02 20:26:55 crc kubenswrapper[4796]: I1202 20:26:55.797519 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/819b363e-8c48-4778-9f8f-b37c1e2e7bd9-bundle\") pod \"10ddc2db95747d078ebd9f771b75baa80e851195a4bcbce3b6c4e62b2ch9jnx\" (UID: \"819b363e-8c48-4778-9f8f-b37c1e2e7bd9\") " pod="openstack-operators/10ddc2db95747d078ebd9f771b75baa80e851195a4bcbce3b6c4e62b2ch9jnx" Dec 02 20:26:55 crc kubenswrapper[4796]: I1202 20:26:55.797631 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/819b363e-8c48-4778-9f8f-b37c1e2e7bd9-util\") pod \"10ddc2db95747d078ebd9f771b75baa80e851195a4bcbce3b6c4e62b2ch9jnx\" (UID: \"819b363e-8c48-4778-9f8f-b37c1e2e7bd9\") " pod="openstack-operators/10ddc2db95747d078ebd9f771b75baa80e851195a4bcbce3b6c4e62b2ch9jnx" Dec 02 20:26:55 crc kubenswrapper[4796]: I1202 20:26:55.797676 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlplf\" (UniqueName: \"kubernetes.io/projected/819b363e-8c48-4778-9f8f-b37c1e2e7bd9-kube-api-access-rlplf\") pod \"10ddc2db95747d078ebd9f771b75baa80e851195a4bcbce3b6c4e62b2ch9jnx\" (UID: \"819b363e-8c48-4778-9f8f-b37c1e2e7bd9\") " pod="openstack-operators/10ddc2db95747d078ebd9f771b75baa80e851195a4bcbce3b6c4e62b2ch9jnx" Dec 02 20:26:55 crc kubenswrapper[4796]: I1202 20:26:55.899610 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/819b363e-8c48-4778-9f8f-b37c1e2e7bd9-util\") pod \"10ddc2db95747d078ebd9f771b75baa80e851195a4bcbce3b6c4e62b2ch9jnx\" (UID: \"819b363e-8c48-4778-9f8f-b37c1e2e7bd9\") " pod="openstack-operators/10ddc2db95747d078ebd9f771b75baa80e851195a4bcbce3b6c4e62b2ch9jnx" Dec 02 20:26:55 crc kubenswrapper[4796]: I1202 20:26:55.899699 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlplf\" (UniqueName: \"kubernetes.io/projected/819b363e-8c48-4778-9f8f-b37c1e2e7bd9-kube-api-access-rlplf\") pod \"10ddc2db95747d078ebd9f771b75baa80e851195a4bcbce3b6c4e62b2ch9jnx\" (UID: \"819b363e-8c48-4778-9f8f-b37c1e2e7bd9\") " pod="openstack-operators/10ddc2db95747d078ebd9f771b75baa80e851195a4bcbce3b6c4e62b2ch9jnx" Dec 02 20:26:55 crc kubenswrapper[4796]: I1202 20:26:55.899840 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/819b363e-8c48-4778-9f8f-b37c1e2e7bd9-bundle\") pod 
\"10ddc2db95747d078ebd9f771b75baa80e851195a4bcbce3b6c4e62b2ch9jnx\" (UID: \"819b363e-8c48-4778-9f8f-b37c1e2e7bd9\") " pod="openstack-operators/10ddc2db95747d078ebd9f771b75baa80e851195a4bcbce3b6c4e62b2ch9jnx" Dec 02 20:26:55 crc kubenswrapper[4796]: I1202 20:26:55.900645 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/819b363e-8c48-4778-9f8f-b37c1e2e7bd9-bundle\") pod \"10ddc2db95747d078ebd9f771b75baa80e851195a4bcbce3b6c4e62b2ch9jnx\" (UID: \"819b363e-8c48-4778-9f8f-b37c1e2e7bd9\") " pod="openstack-operators/10ddc2db95747d078ebd9f771b75baa80e851195a4bcbce3b6c4e62b2ch9jnx" Dec 02 20:26:55 crc kubenswrapper[4796]: I1202 20:26:55.901141 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/819b363e-8c48-4778-9f8f-b37c1e2e7bd9-util\") pod \"10ddc2db95747d078ebd9f771b75baa80e851195a4bcbce3b6c4e62b2ch9jnx\" (UID: \"819b363e-8c48-4778-9f8f-b37c1e2e7bd9\") " pod="openstack-operators/10ddc2db95747d078ebd9f771b75baa80e851195a4bcbce3b6c4e62b2ch9jnx" Dec 02 20:26:55 crc kubenswrapper[4796]: I1202 20:26:55.949162 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlplf\" (UniqueName: \"kubernetes.io/projected/819b363e-8c48-4778-9f8f-b37c1e2e7bd9-kube-api-access-rlplf\") pod \"10ddc2db95747d078ebd9f771b75baa80e851195a4bcbce3b6c4e62b2ch9jnx\" (UID: \"819b363e-8c48-4778-9f8f-b37c1e2e7bd9\") " pod="openstack-operators/10ddc2db95747d078ebd9f771b75baa80e851195a4bcbce3b6c4e62b2ch9jnx" Dec 02 20:26:56 crc kubenswrapper[4796]: I1202 20:26:56.110349 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/10ddc2db95747d078ebd9f771b75baa80e851195a4bcbce3b6c4e62b2ch9jnx" Dec 02 20:26:56 crc kubenswrapper[4796]: I1202 20:26:56.629307 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/10ddc2db95747d078ebd9f771b75baa80e851195a4bcbce3b6c4e62b2ch9jnx"] Dec 02 20:26:57 crc kubenswrapper[4796]: I1202 20:26:57.145810 4796 generic.go:334] "Generic (PLEG): container finished" podID="819b363e-8c48-4778-9f8f-b37c1e2e7bd9" containerID="8eb306f8da0a6753cac0ec9f47ba5294ba6eb59ed5dd34e965504f50e9aea794" exitCode=0 Dec 02 20:26:57 crc kubenswrapper[4796]: I1202 20:26:57.145860 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/10ddc2db95747d078ebd9f771b75baa80e851195a4bcbce3b6c4e62b2ch9jnx" event={"ID":"819b363e-8c48-4778-9f8f-b37c1e2e7bd9","Type":"ContainerDied","Data":"8eb306f8da0a6753cac0ec9f47ba5294ba6eb59ed5dd34e965504f50e9aea794"} Dec 02 20:26:57 crc kubenswrapper[4796]: I1202 20:26:57.145894 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/10ddc2db95747d078ebd9f771b75baa80e851195a4bcbce3b6c4e62b2ch9jnx" event={"ID":"819b363e-8c48-4778-9f8f-b37c1e2e7bd9","Type":"ContainerStarted","Data":"76b74344afac53a3ba5f9f548e17ade5e7b6d05ae40b932cd1d47b396e2729c8"} Dec 02 20:26:58 crc kubenswrapper[4796]: I1202 20:26:58.159111 4796 generic.go:334] "Generic (PLEG): container finished" podID="819b363e-8c48-4778-9f8f-b37c1e2e7bd9" containerID="2c0a604349c58477ac71fe09387c13b17d17c7d9014e7b8e5353627949edb67b" exitCode=0 Dec 02 20:26:58 crc kubenswrapper[4796]: I1202 20:26:58.159306 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/10ddc2db95747d078ebd9f771b75baa80e851195a4bcbce3b6c4e62b2ch9jnx" 
event={"ID":"819b363e-8c48-4778-9f8f-b37c1e2e7bd9","Type":"ContainerDied","Data":"2c0a604349c58477ac71fe09387c13b17d17c7d9014e7b8e5353627949edb67b"} Dec 02 20:26:58 crc kubenswrapper[4796]: I1202 20:26:58.734822 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-98ww8"] Dec 02 20:26:58 crc kubenswrapper[4796]: I1202 20:26:58.738575 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-98ww8" Dec 02 20:26:58 crc kubenswrapper[4796]: I1202 20:26:58.744786 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ffe86a1-fd84-4431-837d-5469fe1f977a-catalog-content\") pod \"community-operators-98ww8\" (UID: \"9ffe86a1-fd84-4431-837d-5469fe1f977a\") " pod="openshift-marketplace/community-operators-98ww8" Dec 02 20:26:58 crc kubenswrapper[4796]: I1202 20:26:58.745240 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4q5j\" (UniqueName: \"kubernetes.io/projected/9ffe86a1-fd84-4431-837d-5469fe1f977a-kube-api-access-m4q5j\") pod \"community-operators-98ww8\" (UID: \"9ffe86a1-fd84-4431-837d-5469fe1f977a\") " pod="openshift-marketplace/community-operators-98ww8" Dec 02 20:26:58 crc kubenswrapper[4796]: I1202 20:26:58.745478 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ffe86a1-fd84-4431-837d-5469fe1f977a-utilities\") pod \"community-operators-98ww8\" (UID: \"9ffe86a1-fd84-4431-837d-5469fe1f977a\") " pod="openshift-marketplace/community-operators-98ww8" Dec 02 20:26:58 crc kubenswrapper[4796]: I1202 20:26:58.808803 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-98ww8"] Dec 02 20:26:58 crc kubenswrapper[4796]: I1202 20:26:58.846104 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4q5j\" (UniqueName: \"kubernetes.io/projected/9ffe86a1-fd84-4431-837d-5469fe1f977a-kube-api-access-m4q5j\") pod \"community-operators-98ww8\" (UID: \"9ffe86a1-fd84-4431-837d-5469fe1f977a\") " pod="openshift-marketplace/community-operators-98ww8" Dec 02 20:26:58 crc kubenswrapper[4796]: I1202 20:26:58.846474 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ffe86a1-fd84-4431-837d-5469fe1f977a-utilities\") pod \"community-operators-98ww8\" (UID: \"9ffe86a1-fd84-4431-837d-5469fe1f977a\") " pod="openshift-marketplace/community-operators-98ww8" Dec 02 20:26:58 crc kubenswrapper[4796]: I1202 20:26:58.846613 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ffe86a1-fd84-4431-837d-5469fe1f977a-catalog-content\") pod \"community-operators-98ww8\" (UID: \"9ffe86a1-fd84-4431-837d-5469fe1f977a\") " pod="openshift-marketplace/community-operators-98ww8" Dec 02 20:26:58 crc kubenswrapper[4796]: I1202 20:26:58.847235 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ffe86a1-fd84-4431-837d-5469fe1f977a-utilities\") pod \"community-operators-98ww8\" (UID: \"9ffe86a1-fd84-4431-837d-5469fe1f977a\") " pod="openshift-marketplace/community-operators-98ww8" Dec 02 20:26:58 crc kubenswrapper[4796]: I1202 
20:26:58.847344 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ffe86a1-fd84-4431-837d-5469fe1f977a-catalog-content\") pod \"community-operators-98ww8\" (UID: \"9ffe86a1-fd84-4431-837d-5469fe1f977a\") " pod="openshift-marketplace/community-operators-98ww8" Dec 02 20:26:58 crc kubenswrapper[4796]: I1202 20:26:58.871905 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4q5j\" (UniqueName: \"kubernetes.io/projected/9ffe86a1-fd84-4431-837d-5469fe1f977a-kube-api-access-m4q5j\") pod \"community-operators-98ww8\" (UID: \"9ffe86a1-fd84-4431-837d-5469fe1f977a\") " pod="openshift-marketplace/community-operators-98ww8" Dec 02 20:26:59 crc kubenswrapper[4796]: I1202 20:26:59.108246 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-98ww8" Dec 02 20:26:59 crc kubenswrapper[4796]: I1202 20:26:59.188505 4796 generic.go:334] "Generic (PLEG): container finished" podID="819b363e-8c48-4778-9f8f-b37c1e2e7bd9" containerID="f78655bfff7fd1558445f46fe4f485ac43e2c42a9b87f22355644ce9c998ee4c" exitCode=0 Dec 02 20:26:59 crc kubenswrapper[4796]: I1202 20:26:59.188595 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/10ddc2db95747d078ebd9f771b75baa80e851195a4bcbce3b6c4e62b2ch9jnx" event={"ID":"819b363e-8c48-4778-9f8f-b37c1e2e7bd9","Type":"ContainerDied","Data":"f78655bfff7fd1558445f46fe4f485ac43e2c42a9b87f22355644ce9c998ee4c"} Dec 02 20:26:59 crc kubenswrapper[4796]: I1202 20:26:59.503607 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-98ww8"] Dec 02 20:27:00 crc kubenswrapper[4796]: I1202 20:27:00.200351 4796 generic.go:334] "Generic (PLEG): container finished" podID="9ffe86a1-fd84-4431-837d-5469fe1f977a" containerID="b5e53acee2f56aa2c39fcc483c816778262dfccb50ec302d2eaeffef48fb2ce3" exitCode=0 Dec 02 20:27:00 crc kubenswrapper[4796]: I1202 20:27:00.200464 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98ww8" event={"ID":"9ffe86a1-fd84-4431-837d-5469fe1f977a","Type":"ContainerDied","Data":"b5e53acee2f56aa2c39fcc483c816778262dfccb50ec302d2eaeffef48fb2ce3"} Dec 02 20:27:00 crc kubenswrapper[4796]: I1202 20:27:00.200848 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98ww8" event={"ID":"9ffe86a1-fd84-4431-837d-5469fe1f977a","Type":"ContainerStarted","Data":"a67f3fdadfdd4dbbeb895f15b33536b841164338c20a42b21f536d590f135f9f"} Dec 02 20:27:00 crc kubenswrapper[4796]: I1202 20:27:00.556827 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/10ddc2db95747d078ebd9f771b75baa80e851195a4bcbce3b6c4e62b2ch9jnx" Dec 02 20:27:00 crc kubenswrapper[4796]: I1202 20:27:00.592643 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlplf\" (UniqueName: \"kubernetes.io/projected/819b363e-8c48-4778-9f8f-b37c1e2e7bd9-kube-api-access-rlplf\") pod \"819b363e-8c48-4778-9f8f-b37c1e2e7bd9\" (UID: \"819b363e-8c48-4778-9f8f-b37c1e2e7bd9\") " Dec 02 20:27:00 crc kubenswrapper[4796]: I1202 20:27:00.592793 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/819b363e-8c48-4778-9f8f-b37c1e2e7bd9-bundle\") pod \"819b363e-8c48-4778-9f8f-b37c1e2e7bd9\" (UID: \"819b363e-8c48-4778-9f8f-b37c1e2e7bd9\") " Dec 02 20:27:00 crc kubenswrapper[4796]: I1202 20:27:00.592907 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/819b363e-8c48-4778-9f8f-b37c1e2e7bd9-util\") pod \"819b363e-8c48-4778-9f8f-b37c1e2e7bd9\" (UID: \"819b363e-8c48-4778-9f8f-b37c1e2e7bd9\") " Dec 02 20:27:00 crc kubenswrapper[4796]: I1202 20:27:00.594195 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/819b363e-8c48-4778-9f8f-b37c1e2e7bd9-bundle" (OuterVolumeSpecName: "bundle") pod "819b363e-8c48-4778-9f8f-b37c1e2e7bd9" (UID: "819b363e-8c48-4778-9f8f-b37c1e2e7bd9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:27:00 crc kubenswrapper[4796]: I1202 20:27:00.599846 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/819b363e-8c48-4778-9f8f-b37c1e2e7bd9-kube-api-access-rlplf" (OuterVolumeSpecName: "kube-api-access-rlplf") pod "819b363e-8c48-4778-9f8f-b37c1e2e7bd9" (UID: "819b363e-8c48-4778-9f8f-b37c1e2e7bd9"). InnerVolumeSpecName "kube-api-access-rlplf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:27:00 crc kubenswrapper[4796]: I1202 20:27:00.622676 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/819b363e-8c48-4778-9f8f-b37c1e2e7bd9-util" (OuterVolumeSpecName: "util") pod "819b363e-8c48-4778-9f8f-b37c1e2e7bd9" (UID: "819b363e-8c48-4778-9f8f-b37c1e2e7bd9"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:27:00 crc kubenswrapper[4796]: I1202 20:27:00.694614 4796 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/819b363e-8c48-4778-9f8f-b37c1e2e7bd9-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:27:00 crc kubenswrapper[4796]: I1202 20:27:00.694672 4796 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/819b363e-8c48-4778-9f8f-b37c1e2e7bd9-util\") on node \"crc\" DevicePath \"\"" Dec 02 20:27:00 crc kubenswrapper[4796]: I1202 20:27:00.694690 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlplf\" (UniqueName: \"kubernetes.io/projected/819b363e-8c48-4778-9f8f-b37c1e2e7bd9-kube-api-access-rlplf\") on node \"crc\" DevicePath \"\"" Dec 02 20:27:01 crc kubenswrapper[4796]: I1202 20:27:01.213017 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/10ddc2db95747d078ebd9f771b75baa80e851195a4bcbce3b6c4e62b2ch9jnx" event={"ID":"819b363e-8c48-4778-9f8f-b37c1e2e7bd9","Type":"ContainerDied","Data":"76b74344afac53a3ba5f9f548e17ade5e7b6d05ae40b932cd1d47b396e2729c8"} Dec 02 20:27:01 crc kubenswrapper[4796]: I1202 20:27:01.213522 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76b74344afac53a3ba5f9f548e17ade5e7b6d05ae40b932cd1d47b396e2729c8" Dec 02 20:27:01 crc kubenswrapper[4796]: I1202 20:27:01.213037 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/10ddc2db95747d078ebd9f771b75baa80e851195a4bcbce3b6c4e62b2ch9jnx" Dec 02 20:27:01 crc kubenswrapper[4796]: I1202 20:27:01.215487 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98ww8" event={"ID":"9ffe86a1-fd84-4431-837d-5469fe1f977a","Type":"ContainerStarted","Data":"4c5cae30116d8f33e8a66bbffbde66a2a4ec06320a6c8bab639704d460e4b105"} Dec 02 20:27:02 crc kubenswrapper[4796]: I1202 20:27:02.225532 4796 generic.go:334] "Generic (PLEG): container finished" podID="9ffe86a1-fd84-4431-837d-5469fe1f977a" containerID="4c5cae30116d8f33e8a66bbffbde66a2a4ec06320a6c8bab639704d460e4b105" exitCode=0 Dec 02 20:27:02 crc kubenswrapper[4796]: I1202 20:27:02.225577 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98ww8" event={"ID":"9ffe86a1-fd84-4431-837d-5469fe1f977a","Type":"ContainerDied","Data":"4c5cae30116d8f33e8a66bbffbde66a2a4ec06320a6c8bab639704d460e4b105"} Dec 02 20:27:03 crc kubenswrapper[4796]: I1202 20:27:03.239812 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98ww8" event={"ID":"9ffe86a1-fd84-4431-837d-5469fe1f977a","Type":"ContainerStarted","Data":"6f0c487e07ada40007be28410d816455abae50b1c550981d0f3fb682e203b531"} Dec 02 20:27:03 crc kubenswrapper[4796]: I1202 20:27:03.271543 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-98ww8" podStartSLOduration=2.465327038 podStartE2EDuration="5.271517458s" podCreationTimestamp="2025-12-02 20:26:58 +0000 UTC" firstStartedPulling="2025-12-02 20:27:00.203311943 +0000 UTC m=+903.206687517" lastFinishedPulling="2025-12-02 20:27:03.009502373 +0000 UTC m=+906.012877937" observedRunningTime="2025-12-02 20:27:03.269513559 +0000 UTC m=+906.272889103" watchObservedRunningTime="2025-12-02 20:27:03.271517458 +0000 UTC m=+906.274893032" Dec 02 20:27:06 crc kubenswrapper[4796]: I1202 
20:27:06.455629 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7f7b957f48-xsfs7"] Dec 02 20:27:06 crc kubenswrapper[4796]: E1202 20:27:06.456764 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="819b363e-8c48-4778-9f8f-b37c1e2e7bd9" containerName="util" Dec 02 20:27:06 crc kubenswrapper[4796]: I1202 20:27:06.456778 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="819b363e-8c48-4778-9f8f-b37c1e2e7bd9" containerName="util" Dec 02 20:27:06 crc kubenswrapper[4796]: E1202 20:27:06.456801 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="819b363e-8c48-4778-9f8f-b37c1e2e7bd9" containerName="extract" Dec 02 20:27:06 crc kubenswrapper[4796]: I1202 20:27:06.456810 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="819b363e-8c48-4778-9f8f-b37c1e2e7bd9" containerName="extract" Dec 02 20:27:06 crc kubenswrapper[4796]: E1202 20:27:06.456823 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="819b363e-8c48-4778-9f8f-b37c1e2e7bd9" containerName="pull" Dec 02 20:27:06 crc kubenswrapper[4796]: I1202 20:27:06.456829 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="819b363e-8c48-4778-9f8f-b37c1e2e7bd9" containerName="pull" Dec 02 20:27:06 crc kubenswrapper[4796]: I1202 20:27:06.456954 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="819b363e-8c48-4778-9f8f-b37c1e2e7bd9" containerName="extract" Dec 02 20:27:06 crc kubenswrapper[4796]: I1202 20:27:06.457536 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7f7b957f48-xsfs7" Dec 02 20:27:06 crc kubenswrapper[4796]: I1202 20:27:06.461753 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-g8lwr" Dec 02 20:27:06 crc kubenswrapper[4796]: I1202 20:27:06.488842 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vrwb\" (UniqueName: \"kubernetes.io/projected/fcd27a49-b8b6-4635-bd73-f7caa2d3785a-kube-api-access-5vrwb\") pod \"openstack-operator-controller-operator-7f7b957f48-xsfs7\" (UID: \"fcd27a49-b8b6-4635-bd73-f7caa2d3785a\") " pod="openstack-operators/openstack-operator-controller-operator-7f7b957f48-xsfs7" Dec 02 20:27:06 crc kubenswrapper[4796]: I1202 20:27:06.491495 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7f7b957f48-xsfs7"] Dec 02 20:27:06 crc kubenswrapper[4796]: I1202 20:27:06.591075 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vrwb\" (UniqueName: \"kubernetes.io/projected/fcd27a49-b8b6-4635-bd73-f7caa2d3785a-kube-api-access-5vrwb\") pod \"openstack-operator-controller-operator-7f7b957f48-xsfs7\" (UID: \"fcd27a49-b8b6-4635-bd73-f7caa2d3785a\") " pod="openstack-operators/openstack-operator-controller-operator-7f7b957f48-xsfs7" Dec 02 20:27:06 crc kubenswrapper[4796]: I1202 20:27:06.619427 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vrwb\" (UniqueName: \"kubernetes.io/projected/fcd27a49-b8b6-4635-bd73-f7caa2d3785a-kube-api-access-5vrwb\") pod \"openstack-operator-controller-operator-7f7b957f48-xsfs7\" (UID: \"fcd27a49-b8b6-4635-bd73-f7caa2d3785a\") " pod="openstack-operators/openstack-operator-controller-operator-7f7b957f48-xsfs7" Dec 02 20:27:06 crc 
kubenswrapper[4796]: I1202 20:27:06.779071 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7f7b957f48-xsfs7" Dec 02 20:27:07 crc kubenswrapper[4796]: I1202 20:27:07.174452 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7f7b957f48-xsfs7"] Dec 02 20:27:07 crc kubenswrapper[4796]: W1202 20:27:07.187535 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcd27a49_b8b6_4635_bd73_f7caa2d3785a.slice/crio-b0bdabdb8924e0e6ce2e21ee05091fdf705b0ae1779c896a73d6df60d5a98960 WatchSource:0}: Error finding container b0bdabdb8924e0e6ce2e21ee05091fdf705b0ae1779c896a73d6df60d5a98960: Status 404 returned error can't find the container with id b0bdabdb8924e0e6ce2e21ee05091fdf705b0ae1779c896a73d6df60d5a98960 Dec 02 20:27:07 crc kubenswrapper[4796]: I1202 20:27:07.277069 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7f7b957f48-xsfs7" event={"ID":"fcd27a49-b8b6-4635-bd73-f7caa2d3785a","Type":"ContainerStarted","Data":"b0bdabdb8924e0e6ce2e21ee05091fdf705b0ae1779c896a73d6df60d5a98960"} Dec 02 20:27:09 crc kubenswrapper[4796]: I1202 20:27:09.109099 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-98ww8" Dec 02 20:27:09 crc kubenswrapper[4796]: I1202 20:27:09.109186 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-98ww8" Dec 02 20:27:09 crc kubenswrapper[4796]: I1202 20:27:09.202779 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-98ww8" Dec 02 20:27:09 crc kubenswrapper[4796]: I1202 20:27:09.336544 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-98ww8" Dec 02 20:27:11 crc kubenswrapper[4796]: I1202 20:27:11.521918 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-98ww8"] Dec 02 20:27:11 crc kubenswrapper[4796]: I1202 20:27:11.522748 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-98ww8" podUID="9ffe86a1-fd84-4431-837d-5469fe1f977a" containerName="registry-server" containerID="cri-o://6f0c487e07ada40007be28410d816455abae50b1c550981d0f3fb682e203b531" gracePeriod=2 Dec 02 20:27:12 crc kubenswrapper[4796]: I1202 20:27:12.143751 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-98ww8" Dec 02 20:27:12 crc kubenswrapper[4796]: I1202 20:27:12.289518 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ffe86a1-fd84-4431-837d-5469fe1f977a-utilities\") pod \"9ffe86a1-fd84-4431-837d-5469fe1f977a\" (UID: \"9ffe86a1-fd84-4431-837d-5469fe1f977a\") " Dec 02 20:27:12 crc kubenswrapper[4796]: I1202 20:27:12.289601 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ffe86a1-fd84-4431-837d-5469fe1f977a-catalog-content\") pod \"9ffe86a1-fd84-4431-837d-5469fe1f977a\" (UID: \"9ffe86a1-fd84-4431-837d-5469fe1f977a\") " Dec 02 20:27:12 crc kubenswrapper[4796]: I1202 20:27:12.289634 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4q5j\" (UniqueName: \"kubernetes.io/projected/9ffe86a1-fd84-4431-837d-5469fe1f977a-kube-api-access-m4q5j\") pod \"9ffe86a1-fd84-4431-837d-5469fe1f977a\" (UID: \"9ffe86a1-fd84-4431-837d-5469fe1f977a\") " Dec 02 20:27:12 crc kubenswrapper[4796]: I1202 20:27:12.291080 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ffe86a1-fd84-4431-837d-5469fe1f977a-utilities" (OuterVolumeSpecName: "utilities") pod "9ffe86a1-fd84-4431-837d-5469fe1f977a" (UID: "9ffe86a1-fd84-4431-837d-5469fe1f977a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:27:12 crc kubenswrapper[4796]: I1202 20:27:12.296616 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ffe86a1-fd84-4431-837d-5469fe1f977a-kube-api-access-m4q5j" (OuterVolumeSpecName: "kube-api-access-m4q5j") pod "9ffe86a1-fd84-4431-837d-5469fe1f977a" (UID: "9ffe86a1-fd84-4431-837d-5469fe1f977a"). InnerVolumeSpecName "kube-api-access-m4q5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:27:12 crc kubenswrapper[4796]: I1202 20:27:12.319573 4796 generic.go:334] "Generic (PLEG): container finished" podID="9ffe86a1-fd84-4431-837d-5469fe1f977a" containerID="6f0c487e07ada40007be28410d816455abae50b1c550981d0f3fb682e203b531" exitCode=0 Dec 02 20:27:12 crc kubenswrapper[4796]: I1202 20:27:12.319914 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-98ww8" Dec 02 20:27:12 crc kubenswrapper[4796]: I1202 20:27:12.320145 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98ww8" event={"ID":"9ffe86a1-fd84-4431-837d-5469fe1f977a","Type":"ContainerDied","Data":"6f0c487e07ada40007be28410d816455abae50b1c550981d0f3fb682e203b531"} Dec 02 20:27:12 crc kubenswrapper[4796]: I1202 20:27:12.320231 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98ww8" event={"ID":"9ffe86a1-fd84-4431-837d-5469fe1f977a","Type":"ContainerDied","Data":"a67f3fdadfdd4dbbeb895f15b33536b841164338c20a42b21f536d590f135f9f"} Dec 02 20:27:12 crc kubenswrapper[4796]: I1202 20:27:12.320288 4796 scope.go:117] "RemoveContainer" containerID="6f0c487e07ada40007be28410d816455abae50b1c550981d0f3fb682e203b531" Dec 02 20:27:12 crc kubenswrapper[4796]: I1202 20:27:12.345008 4796 scope.go:117] "RemoveContainer" containerID="4c5cae30116d8f33e8a66bbffbde66a2a4ec06320a6c8bab639704d460e4b105" Dec 02 20:27:12 crc kubenswrapper[4796]: I1202 20:27:12.359655 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ffe86a1-fd84-4431-837d-5469fe1f977a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ffe86a1-fd84-4431-837d-5469fe1f977a" (UID: "9ffe86a1-fd84-4431-837d-5469fe1f977a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:27:12 crc kubenswrapper[4796]: I1202 20:27:12.370408 4796 scope.go:117] "RemoveContainer" containerID="b5e53acee2f56aa2c39fcc483c816778262dfccb50ec302d2eaeffef48fb2ce3" Dec 02 20:27:12 crc kubenswrapper[4796]: I1202 20:27:12.388980 4796 scope.go:117] "RemoveContainer" containerID="6f0c487e07ada40007be28410d816455abae50b1c550981d0f3fb682e203b531" Dec 02 20:27:12 crc kubenswrapper[4796]: E1202 20:27:12.393651 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f0c487e07ada40007be28410d816455abae50b1c550981d0f3fb682e203b531\": container with ID starting with 6f0c487e07ada40007be28410d816455abae50b1c550981d0f3fb682e203b531 not found: ID does not exist" containerID="6f0c487e07ada40007be28410d816455abae50b1c550981d0f3fb682e203b531" Dec 02 20:27:12 crc kubenswrapper[4796]: I1202 20:27:12.393699 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f0c487e07ada40007be28410d816455abae50b1c550981d0f3fb682e203b531"} err="failed to get container status \"6f0c487e07ada40007be28410d816455abae50b1c550981d0f3fb682e203b531\": rpc error: code = NotFound desc = could not find container \"6f0c487e07ada40007be28410d816455abae50b1c550981d0f3fb682e203b531\": container with ID starting with 6f0c487e07ada40007be28410d816455abae50b1c550981d0f3fb682e203b531 not found: ID does not exist" Dec 02 20:27:12 crc kubenswrapper[4796]: I1202 20:27:12.393727 4796 scope.go:117] "RemoveContainer" containerID="4c5cae30116d8f33e8a66bbffbde66a2a4ec06320a6c8bab639704d460e4b105" Dec 02 20:27:12 crc kubenswrapper[4796]: I1202 20:27:12.394602 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ffe86a1-fd84-4431-837d-5469fe1f977a-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 20:27:12 crc kubenswrapper[4796]: I1202 20:27:12.394633 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9ffe86a1-fd84-4431-837d-5469fe1f977a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 20:27:12 crc kubenswrapper[4796]: I1202 20:27:12.394643 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4q5j\" (UniqueName: \"kubernetes.io/projected/9ffe86a1-fd84-4431-837d-5469fe1f977a-kube-api-access-m4q5j\") on node \"crc\" DevicePath \"\"" Dec 02 20:27:12 crc kubenswrapper[4796]: E1202 20:27:12.395819 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c5cae30116d8f33e8a66bbffbde66a2a4ec06320a6c8bab639704d460e4b105\": container with ID starting with 4c5cae30116d8f33e8a66bbffbde66a2a4ec06320a6c8bab639704d460e4b105 not found: ID does not exist" containerID="4c5cae30116d8f33e8a66bbffbde66a2a4ec06320a6c8bab639704d460e4b105" Dec 02 20:27:12 crc kubenswrapper[4796]: I1202 20:27:12.395843 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c5cae30116d8f33e8a66bbffbde66a2a4ec06320a6c8bab639704d460e4b105"} err="failed to get container status \"4c5cae30116d8f33e8a66bbffbde66a2a4ec06320a6c8bab639704d460e4b105\": rpc error: code = NotFound desc = could not find container \"4c5cae30116d8f33e8a66bbffbde66a2a4ec06320a6c8bab639704d460e4b105\": container with ID starting with 4c5cae30116d8f33e8a66bbffbde66a2a4ec06320a6c8bab639704d460e4b105 not found: ID does not exist" Dec 02 20:27:12 crc kubenswrapper[4796]: I1202 20:27:12.395856 4796 scope.go:117] "RemoveContainer" containerID="b5e53acee2f56aa2c39fcc483c816778262dfccb50ec302d2eaeffef48fb2ce3" Dec 02 20:27:12 crc kubenswrapper[4796]: E1202 20:27:12.396342 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5e53acee2f56aa2c39fcc483c816778262dfccb50ec302d2eaeffef48fb2ce3\": container with ID starting with b5e53acee2f56aa2c39fcc483c816778262dfccb50ec302d2eaeffef48fb2ce3 not found: ID does not exist" containerID="b5e53acee2f56aa2c39fcc483c816778262dfccb50ec302d2eaeffef48fb2ce3" Dec 02 20:27:12 crc kubenswrapper[4796]: I1202 20:27:12.396363 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5e53acee2f56aa2c39fcc483c816778262dfccb50ec302d2eaeffef48fb2ce3"} err="failed to get container status \"b5e53acee2f56aa2c39fcc483c816778262dfccb50ec302d2eaeffef48fb2ce3\": rpc error: code = NotFound desc = could not find container \"b5e53acee2f56aa2c39fcc483c816778262dfccb50ec302d2eaeffef48fb2ce3\": container with ID starting with b5e53acee2f56aa2c39fcc483c816778262dfccb50ec302d2eaeffef48fb2ce3 not found: ID does not exist" Dec 02 20:27:12 crc kubenswrapper[4796]: I1202 20:27:12.651470 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-98ww8"] Dec 02 20:27:12 crc kubenswrapper[4796]: I1202 20:27:12.655442 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-98ww8"] Dec 02 20:27:13 crc kubenswrapper[4796]: I1202 20:27:13.274757 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ffe86a1-fd84-4431-837d-5469fe1f977a" path="/var/lib/kubelet/pods/9ffe86a1-fd84-4431-837d-5469fe1f977a/volumes" Dec 02 20:27:13 crc kubenswrapper[4796]: I1202 20:27:13.338041 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7f7b957f48-xsfs7" 
event={"ID":"fcd27a49-b8b6-4635-bd73-f7caa2d3785a","Type":"ContainerStarted","Data":"8167ea65004b8cf421a89ca4cf139176169d272e2173cfc5b37e72d66d195577"} Dec 02 20:27:13 crc kubenswrapper[4796]: I1202 20:27:13.338151 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-7f7b957f48-xsfs7" Dec 02 20:27:13 crc kubenswrapper[4796]: I1202 20:27:13.374827 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-7f7b957f48-xsfs7" podStartSLOduration=2.407304635 podStartE2EDuration="7.374729643s" podCreationTimestamp="2025-12-02 20:27:06 +0000 UTC" firstStartedPulling="2025-12-02 20:27:07.189649599 +0000 UTC m=+910.193025133" lastFinishedPulling="2025-12-02 20:27:12.157074577 +0000 UTC m=+915.160450141" observedRunningTime="2025-12-02 20:27:13.3671129 +0000 UTC m=+916.370488454" watchObservedRunningTime="2025-12-02 20:27:13.374729643 +0000 UTC m=+916.378105177" Dec 02 20:27:19 crc kubenswrapper[4796]: I1202 20:27:19.196767 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fq7rq"] Dec 02 20:27:19 crc kubenswrapper[4796]: E1202 20:27:19.197808 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ffe86a1-fd84-4431-837d-5469fe1f977a" containerName="extract-content" Dec 02 20:27:19 crc kubenswrapper[4796]: I1202 20:27:19.197830 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ffe86a1-fd84-4431-837d-5469fe1f977a" containerName="extract-content" Dec 02 20:27:19 crc kubenswrapper[4796]: E1202 20:27:19.197851 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ffe86a1-fd84-4431-837d-5469fe1f977a" containerName="extract-utilities" Dec 02 20:27:19 crc kubenswrapper[4796]: I1202 20:27:19.197864 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ffe86a1-fd84-4431-837d-5469fe1f977a" containerName="extract-utilities" Dec 02 20:27:19 crc kubenswrapper[4796]: E1202 20:27:19.197888 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ffe86a1-fd84-4431-837d-5469fe1f977a" containerName="registry-server" Dec 02 20:27:19 crc kubenswrapper[4796]: I1202 20:27:19.197902 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ffe86a1-fd84-4431-837d-5469fe1f977a" containerName="registry-server" Dec 02 20:27:19 crc kubenswrapper[4796]: I1202 20:27:19.198118 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ffe86a1-fd84-4431-837d-5469fe1f977a" containerName="registry-server" Dec 02 20:27:19 crc kubenswrapper[4796]: I1202 20:27:19.200224 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fq7rq" Dec 02 20:27:19 crc kubenswrapper[4796]: I1202 20:27:19.217054 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fq7rq"] Dec 02 20:27:19 crc kubenswrapper[4796]: I1202 20:27:19.313874 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2prj\" (UniqueName: \"kubernetes.io/projected/dc8abd34-87ad-4eea-adeb-4085a819eeb0-kube-api-access-z2prj\") pod \"redhat-marketplace-fq7rq\" (UID: \"dc8abd34-87ad-4eea-adeb-4085a819eeb0\") " pod="openshift-marketplace/redhat-marketplace-fq7rq" Dec 02 20:27:19 crc kubenswrapper[4796]: I1202 20:27:19.313964 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc8abd34-87ad-4eea-adeb-4085a819eeb0-catalog-content\") pod \"redhat-marketplace-fq7rq\" (UID: \"dc8abd34-87ad-4eea-adeb-4085a819eeb0\") " pod="openshift-marketplace/redhat-marketplace-fq7rq" Dec 02 20:27:19 crc kubenswrapper[4796]: I1202 20:27:19.314098 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc8abd34-87ad-4eea-adeb-4085a819eeb0-utilities\") pod \"redhat-marketplace-fq7rq\" (UID: \"dc8abd34-87ad-4eea-adeb-4085a819eeb0\") " pod="openshift-marketplace/redhat-marketplace-fq7rq" Dec 02 20:27:19 crc kubenswrapper[4796]: I1202 20:27:19.426850 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2prj\" (UniqueName: \"kubernetes.io/projected/dc8abd34-87ad-4eea-adeb-4085a819eeb0-kube-api-access-z2prj\") pod \"redhat-marketplace-fq7rq\" (UID: \"dc8abd34-87ad-4eea-adeb-4085a819eeb0\") " pod="openshift-marketplace/redhat-marketplace-fq7rq" Dec 02 20:27:19 crc kubenswrapper[4796]: I1202 20:27:19.426990 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc8abd34-87ad-4eea-adeb-4085a819eeb0-catalog-content\") pod \"redhat-marketplace-fq7rq\" (UID: \"dc8abd34-87ad-4eea-adeb-4085a819eeb0\") " pod="openshift-marketplace/redhat-marketplace-fq7rq" Dec 02 20:27:19 crc kubenswrapper[4796]: I1202 20:27:19.427274 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc8abd34-87ad-4eea-adeb-4085a819eeb0-utilities\") pod \"redhat-marketplace-fq7rq\" (UID: \"dc8abd34-87ad-4eea-adeb-4085a819eeb0\") " pod="openshift-marketplace/redhat-marketplace-fq7rq" Dec 02 20:27:19 crc kubenswrapper[4796]: I1202 20:27:19.428100 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc8abd34-87ad-4eea-adeb-4085a819eeb0-utilities\") pod \"redhat-marketplace-fq7rq\" (UID: \"dc8abd34-87ad-4eea-adeb-4085a819eeb0\") " pod="openshift-marketplace/redhat-marketplace-fq7rq" Dec 02 20:27:19 crc kubenswrapper[4796]: I1202 20:27:19.428543 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc8abd34-87ad-4eea-adeb-4085a819eeb0-catalog-content\") pod \"redhat-marketplace-fq7rq\" (UID: \"dc8abd34-87ad-4eea-adeb-4085a819eeb0\") " pod="openshift-marketplace/redhat-marketplace-fq7rq" Dec 02 20:27:19 crc kubenswrapper[4796]: I1202 20:27:19.478613 4796 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-z2prj\" (UniqueName: \"kubernetes.io/projected/dc8abd34-87ad-4eea-adeb-4085a819eeb0-kube-api-access-z2prj\") pod \"redhat-marketplace-fq7rq\" (UID: \"dc8abd34-87ad-4eea-adeb-4085a819eeb0\") " pod="openshift-marketplace/redhat-marketplace-fq7rq" Dec 02 20:27:19 crc kubenswrapper[4796]: I1202 20:27:19.564980 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fq7rq" Dec 02 20:27:19 crc kubenswrapper[4796]: I1202 20:27:19.810289 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fq7rq"] Dec 02 20:27:20 crc kubenswrapper[4796]: I1202 20:27:20.439234 4796 generic.go:334] "Generic (PLEG): container finished" podID="dc8abd34-87ad-4eea-adeb-4085a819eeb0" containerID="aaf021f18c52d90910196503289ae8f63bc4159a498f8dfe2a38980def3b29dd" exitCode=0 Dec 02 20:27:20 crc kubenswrapper[4796]: I1202 20:27:20.439318 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fq7rq" event={"ID":"dc8abd34-87ad-4eea-adeb-4085a819eeb0","Type":"ContainerDied","Data":"aaf021f18c52d90910196503289ae8f63bc4159a498f8dfe2a38980def3b29dd"} Dec 02 20:27:20 crc kubenswrapper[4796]: I1202 20:27:20.439401 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fq7rq" event={"ID":"dc8abd34-87ad-4eea-adeb-4085a819eeb0","Type":"ContainerStarted","Data":"3201990c9e50ee7ca3776f2a7a607dad71342bd3f4006db847923cd9e1864376"} Dec 02 20:27:21 crc kubenswrapper[4796]: I1202 20:27:21.449050 4796 generic.go:334] "Generic (PLEG): container finished" podID="dc8abd34-87ad-4eea-adeb-4085a819eeb0" containerID="78dcfcdad9df1091a83bae6d14c1793804e2ccdb7620f547ce4c991a7ff6a6e1" exitCode=0 Dec 02 20:27:21 crc kubenswrapper[4796]: I1202 20:27:21.449444 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fq7rq" event={"ID":"dc8abd34-87ad-4eea-adeb-4085a819eeb0","Type":"ContainerDied","Data":"78dcfcdad9df1091a83bae6d14c1793804e2ccdb7620f547ce4c991a7ff6a6e1"} Dec 02 20:27:22 crc kubenswrapper[4796]: I1202 20:27:22.464370 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fq7rq" event={"ID":"dc8abd34-87ad-4eea-adeb-4085a819eeb0","Type":"ContainerStarted","Data":"23d2824518f85c5f9cc91951d15fe4c78d63ec1fc04d204c6c663dd54cfd1159"} Dec 02 20:27:22 crc kubenswrapper[4796]: I1202 20:27:22.493738 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fq7rq" podStartSLOduration=2.005383685 podStartE2EDuration="3.493713076s" podCreationTimestamp="2025-12-02 20:27:19 +0000 UTC" firstStartedPulling="2025-12-02 20:27:20.444904966 +0000 UTC m=+923.448280500" lastFinishedPulling="2025-12-02 20:27:21.933234347 +0000 UTC m=+924.936609891" observedRunningTime="2025-12-02 20:27:22.490092099 +0000 UTC m=+925.493467673" watchObservedRunningTime="2025-12-02 20:27:22.493713076 +0000 UTC m=+925.497088640" Dec 02 20:27:25 crc kubenswrapper[4796]: I1202 20:27:25.189682 4796 patch_prober.go:28] interesting pod/machine-config-daemon-wzhpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 20:27:25 crc kubenswrapper[4796]: I1202 20:27:25.190362 4796 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 20:27:26 crc kubenswrapper[4796]: I1202 20:27:26.782187 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-7f7b957f48-xsfs7" Dec 02 20:27:29 crc kubenswrapper[4796]: I1202 20:27:29.565919 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fq7rq" Dec 02 20:27:29 crc kubenswrapper[4796]: I1202 20:27:29.566971 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fq7rq" Dec 02 20:27:29 crc kubenswrapper[4796]: I1202 20:27:29.629053 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fq7rq" Dec 02 20:27:30 crc kubenswrapper[4796]: I1202 20:27:30.664432 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fq7rq" Dec 02 20:27:30 crc kubenswrapper[4796]: I1202 20:27:30.727341 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fq7rq"] Dec 02 20:27:32 crc kubenswrapper[4796]: I1202 20:27:32.591838 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fq7rq" podUID="dc8abd34-87ad-4eea-adeb-4085a819eeb0" containerName="registry-server" containerID="cri-o://23d2824518f85c5f9cc91951d15fe4c78d63ec1fc04d204c6c663dd54cfd1159" gracePeriod=2 Dec 02 20:27:33 crc kubenswrapper[4796]: I1202 20:27:33.084298 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fq7rq" Dec 02 20:27:33 crc kubenswrapper[4796]: I1202 20:27:33.242208 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc8abd34-87ad-4eea-adeb-4085a819eeb0-utilities\") pod \"dc8abd34-87ad-4eea-adeb-4085a819eeb0\" (UID: \"dc8abd34-87ad-4eea-adeb-4085a819eeb0\") " Dec 02 20:27:33 crc kubenswrapper[4796]: I1202 20:27:33.242534 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2prj\" (UniqueName: \"kubernetes.io/projected/dc8abd34-87ad-4eea-adeb-4085a819eeb0-kube-api-access-z2prj\") pod \"dc8abd34-87ad-4eea-adeb-4085a819eeb0\" (UID: \"dc8abd34-87ad-4eea-adeb-4085a819eeb0\") " Dec 02 20:27:33 crc kubenswrapper[4796]: I1202 20:27:33.242644 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc8abd34-87ad-4eea-adeb-4085a819eeb0-catalog-content\") pod \"dc8abd34-87ad-4eea-adeb-4085a819eeb0\" (UID: \"dc8abd34-87ad-4eea-adeb-4085a819eeb0\") " Dec 02 20:27:33 crc kubenswrapper[4796]: I1202 20:27:33.243145 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc8abd34-87ad-4eea-adeb-4085a819eeb0-utilities" (OuterVolumeSpecName: "utilities") pod "dc8abd34-87ad-4eea-adeb-4085a819eeb0" (UID: "dc8abd34-87ad-4eea-adeb-4085a819eeb0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:27:33 crc kubenswrapper[4796]: I1202 20:27:33.261542 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc8abd34-87ad-4eea-adeb-4085a819eeb0-kube-api-access-z2prj" (OuterVolumeSpecName: "kube-api-access-z2prj") pod "dc8abd34-87ad-4eea-adeb-4085a819eeb0" (UID: "dc8abd34-87ad-4eea-adeb-4085a819eeb0"). InnerVolumeSpecName "kube-api-access-z2prj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:27:33 crc kubenswrapper[4796]: I1202 20:27:33.267345 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc8abd34-87ad-4eea-adeb-4085a819eeb0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dc8abd34-87ad-4eea-adeb-4085a819eeb0" (UID: "dc8abd34-87ad-4eea-adeb-4085a819eeb0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:27:33 crc kubenswrapper[4796]: I1202 20:27:33.343961 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc8abd34-87ad-4eea-adeb-4085a819eeb0-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 20:27:33 crc kubenswrapper[4796]: I1202 20:27:33.344499 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2prj\" (UniqueName: \"kubernetes.io/projected/dc8abd34-87ad-4eea-adeb-4085a819eeb0-kube-api-access-z2prj\") on node \"crc\" DevicePath \"\"" Dec 02 20:27:33 crc kubenswrapper[4796]: I1202 20:27:33.344516 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc8abd34-87ad-4eea-adeb-4085a819eeb0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 20:27:33 crc kubenswrapper[4796]: I1202 20:27:33.600554 4796 generic.go:334] "Generic (PLEG): container finished" podID="dc8abd34-87ad-4eea-adeb-4085a819eeb0" containerID="23d2824518f85c5f9cc91951d15fe4c78d63ec1fc04d204c6c663dd54cfd1159" exitCode=0 Dec 02 20:27:33 crc kubenswrapper[4796]: I1202 20:27:33.600631 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fq7rq" Dec 02 20:27:33 crc kubenswrapper[4796]: I1202 20:27:33.600651 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fq7rq" event={"ID":"dc8abd34-87ad-4eea-adeb-4085a819eeb0","Type":"ContainerDied","Data":"23d2824518f85c5f9cc91951d15fe4c78d63ec1fc04d204c6c663dd54cfd1159"} Dec 02 20:27:33 crc kubenswrapper[4796]: I1202 20:27:33.601705 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fq7rq" event={"ID":"dc8abd34-87ad-4eea-adeb-4085a819eeb0","Type":"ContainerDied","Data":"3201990c9e50ee7ca3776f2a7a607dad71342bd3f4006db847923cd9e1864376"} Dec 02 20:27:33 crc kubenswrapper[4796]: I1202 20:27:33.601730 4796 scope.go:117] "RemoveContainer" containerID="23d2824518f85c5f9cc91951d15fe4c78d63ec1fc04d204c6c663dd54cfd1159" Dec 02 20:27:33 crc kubenswrapper[4796]: I1202 20:27:33.623567 4796 scope.go:117] "RemoveContainer" containerID="78dcfcdad9df1091a83bae6d14c1793804e2ccdb7620f547ce4c991a7ff6a6e1" Dec 02 20:27:33 crc kubenswrapper[4796]: I1202 20:27:33.637991 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fq7rq"] Dec 02 20:27:33 crc kubenswrapper[4796]: I1202 20:27:33.646685 4796 scope.go:117] "RemoveContainer" containerID="aaf021f18c52d90910196503289ae8f63bc4159a498f8dfe2a38980def3b29dd" Dec 02 20:27:33 crc kubenswrapper[4796]: I1202 20:27:33.659173 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fq7rq"] Dec 02 20:27:33 crc kubenswrapper[4796]: I1202 20:27:33.682035 4796 scope.go:117] "RemoveContainer" containerID="23d2824518f85c5f9cc91951d15fe4c78d63ec1fc04d204c6c663dd54cfd1159" Dec 02 20:27:33 crc kubenswrapper[4796]: E1202 20:27:33.682702 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23d2824518f85c5f9cc91951d15fe4c78d63ec1fc04d204c6c663dd54cfd1159\": container with ID starting with 23d2824518f85c5f9cc91951d15fe4c78d63ec1fc04d204c6c663dd54cfd1159 not found: ID does not exist" containerID="23d2824518f85c5f9cc91951d15fe4c78d63ec1fc04d204c6c663dd54cfd1159" Dec 02 20:27:33 crc kubenswrapper[4796]: I1202 20:27:33.682762 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23d2824518f85c5f9cc91951d15fe4c78d63ec1fc04d204c6c663dd54cfd1159"} err="failed to get container status \"23d2824518f85c5f9cc91951d15fe4c78d63ec1fc04d204c6c663dd54cfd1159\": rpc error: code = NotFound desc = could not find container \"23d2824518f85c5f9cc91951d15fe4c78d63ec1fc04d204c6c663dd54cfd1159\": container with ID starting with 23d2824518f85c5f9cc91951d15fe4c78d63ec1fc04d204c6c663dd54cfd1159 not found: ID does not exist" Dec 02 20:27:33 crc kubenswrapper[4796]: I1202 20:27:33.682801 4796 scope.go:117] "RemoveContainer" containerID="78dcfcdad9df1091a83bae6d14c1793804e2ccdb7620f547ce4c991a7ff6a6e1" Dec 02 20:27:33 crc kubenswrapper[4796]: E1202 20:27:33.683196 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78dcfcdad9df1091a83bae6d14c1793804e2ccdb7620f547ce4c991a7ff6a6e1\": container with ID starting with 78dcfcdad9df1091a83bae6d14c1793804e2ccdb7620f547ce4c991a7ff6a6e1 not found: ID does not exist" containerID="78dcfcdad9df1091a83bae6d14c1793804e2ccdb7620f547ce4c991a7ff6a6e1" Dec 02 20:27:33 crc kubenswrapper[4796]: I1202 20:27:33.683225 4796 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78dcfcdad9df1091a83bae6d14c1793804e2ccdb7620f547ce4c991a7ff6a6e1"} err="failed to get container status \"78dcfcdad9df1091a83bae6d14c1793804e2ccdb7620f547ce4c991a7ff6a6e1\": rpc error: code = NotFound desc = could not find container \"78dcfcdad9df1091a83bae6d14c1793804e2ccdb7620f547ce4c991a7ff6a6e1\": container with ID starting with 78dcfcdad9df1091a83bae6d14c1793804e2ccdb7620f547ce4c991a7ff6a6e1 not found: ID does not exist" Dec 02 20:27:33 crc kubenswrapper[4796]: I1202 20:27:33.683241 4796 scope.go:117] "RemoveContainer" containerID="aaf021f18c52d90910196503289ae8f63bc4159a498f8dfe2a38980def3b29dd" Dec 02 20:27:33 crc kubenswrapper[4796]: E1202 20:27:33.683700 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aaf021f18c52d90910196503289ae8f63bc4159a498f8dfe2a38980def3b29dd\": container with ID starting with aaf021f18c52d90910196503289ae8f63bc4159a498f8dfe2a38980def3b29dd not found: ID does not exist" containerID="aaf021f18c52d90910196503289ae8f63bc4159a498f8dfe2a38980def3b29dd" Dec 02 20:27:33 crc kubenswrapper[4796]: I1202 20:27:33.683742 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaf021f18c52d90910196503289ae8f63bc4159a498f8dfe2a38980def3b29dd"} err="failed to get container status \"aaf021f18c52d90910196503289ae8f63bc4159a498f8dfe2a38980def3b29dd\": rpc error: code = NotFound desc = could not find container \"aaf021f18c52d90910196503289ae8f63bc4159a498f8dfe2a38980def3b29dd\": container with ID starting with aaf021f18c52d90910196503289ae8f63bc4159a498f8dfe2a38980def3b29dd not found: ID does not exist" Dec 02 20:27:35 crc kubenswrapper[4796]: I1202 20:27:35.273427 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc8abd34-87ad-4eea-adeb-4085a819eeb0" path="/var/lib/kubelet/pods/dc8abd34-87ad-4eea-adeb-4085a819eeb0/volumes" Dec 02 20:27:38 crc kubenswrapper[4796]: I1202 20:27:38.458416 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hzcbh"] Dec 02 20:27:38 crc kubenswrapper[4796]: E1202 20:27:38.459181 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc8abd34-87ad-4eea-adeb-4085a819eeb0" containerName="registry-server" Dec 02 20:27:38 crc kubenswrapper[4796]: I1202 20:27:38.459197 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc8abd34-87ad-4eea-adeb-4085a819eeb0" containerName="registry-server" Dec 02 20:27:38 crc kubenswrapper[4796]: E1202 20:27:38.459218 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc8abd34-87ad-4eea-adeb-4085a819eeb0" containerName="extract-utilities" Dec 02 20:27:38 crc kubenswrapper[4796]: I1202 20:27:38.459226 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc8abd34-87ad-4eea-adeb-4085a819eeb0" containerName="extract-utilities" Dec 02 20:27:38 crc kubenswrapper[4796]: E1202 20:27:38.459232 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc8abd34-87ad-4eea-adeb-4085a819eeb0" containerName="extract-content" Dec 02 20:27:38 crc kubenswrapper[4796]: I1202 20:27:38.459238 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc8abd34-87ad-4eea-adeb-4085a819eeb0" containerName="extract-content" Dec 02 20:27:38 crc kubenswrapper[4796]: I1202 20:27:38.459385 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc8abd34-87ad-4eea-adeb-4085a819eeb0" 
containerName="registry-server" Dec 02 20:27:38 crc kubenswrapper[4796]: I1202 20:27:38.460523 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hzcbh" Dec 02 20:27:38 crc kubenswrapper[4796]: I1202 20:27:38.483979 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hzcbh"] Dec 02 20:27:38 crc kubenswrapper[4796]: I1202 20:27:38.529014 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef-catalog-content\") pod \"certified-operators-hzcbh\" (UID: \"a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef\") " pod="openshift-marketplace/certified-operators-hzcbh" Dec 02 20:27:38 crc kubenswrapper[4796]: I1202 20:27:38.529089 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef-utilities\") pod \"certified-operators-hzcbh\" (UID: \"a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef\") " pod="openshift-marketplace/certified-operators-hzcbh" Dec 02 20:27:38 crc kubenswrapper[4796]: I1202 20:27:38.529138 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn725\" (UniqueName: \"kubernetes.io/projected/a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef-kube-api-access-dn725\") pod \"certified-operators-hzcbh\" (UID: \"a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef\") " pod="openshift-marketplace/certified-operators-hzcbh" Dec 02 20:27:38 crc kubenswrapper[4796]: I1202 20:27:38.630333 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef-catalog-content\") pod \"certified-operators-hzcbh\" (UID: \"a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef\") " pod="openshift-marketplace/certified-operators-hzcbh" Dec 02 20:27:38 crc kubenswrapper[4796]: I1202 20:27:38.630407 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef-utilities\") pod \"certified-operators-hzcbh\" (UID: \"a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef\") " pod="openshift-marketplace/certified-operators-hzcbh" Dec 02 20:27:38 crc kubenswrapper[4796]: I1202 20:27:38.630454 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn725\" (UniqueName: \"kubernetes.io/projected/a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef-kube-api-access-dn725\") pod \"certified-operators-hzcbh\" (UID: \"a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef\") " pod="openshift-marketplace/certified-operators-hzcbh" Dec 02 20:27:38 crc kubenswrapper[4796]: I1202 20:27:38.630958 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef-catalog-content\") pod \"certified-operators-hzcbh\" (UID: \"a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef\") " pod="openshift-marketplace/certified-operators-hzcbh" Dec 02 20:27:38 crc kubenswrapper[4796]: I1202 20:27:38.631118 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef-utilities\") pod \"certified-operators-hzcbh\" (UID: \"a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef\") " 
pod="openshift-marketplace/certified-operators-hzcbh" Dec 02 20:27:38 crc kubenswrapper[4796]: I1202 20:27:38.656510 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn725\" (UniqueName: \"kubernetes.io/projected/a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef-kube-api-access-dn725\") pod \"certified-operators-hzcbh\" (UID: \"a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef\") " pod="openshift-marketplace/certified-operators-hzcbh" Dec 02 20:27:38 crc kubenswrapper[4796]: I1202 20:27:38.793712 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hzcbh" Dec 02 20:27:39 crc kubenswrapper[4796]: I1202 20:27:39.079556 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hzcbh"] Dec 02 20:27:39 crc kubenswrapper[4796]: I1202 20:27:39.647443 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzcbh" event={"ID":"a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef","Type":"ContainerStarted","Data":"27c6bf558d22deed9966971e86146376a03e328429eae43e31d40bdf62b56e0d"} Dec 02 20:27:40 crc kubenswrapper[4796]: I1202 20:27:40.658489 4796 generic.go:334] "Generic (PLEG): container finished" podID="a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef" containerID="0f6fcef609370d010706753cdd643bee88a5345f684e4d9012f064f163ffc949" exitCode=0 Dec 02 20:27:40 crc kubenswrapper[4796]: I1202 20:27:40.659562 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzcbh" event={"ID":"a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef","Type":"ContainerDied","Data":"0f6fcef609370d010706753cdd643bee88a5345f684e4d9012f064f163ffc949"} Dec 02 20:27:40 crc kubenswrapper[4796]: I1202 20:27:40.661919 4796 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 20:27:41 crc kubenswrapper[4796]: I1202 20:27:41.668976 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzcbh" event={"ID":"a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef","Type":"ContainerStarted","Data":"2cf9fbd2aa773dd6b1beb8724b0266415b35435f7d630a0eaf7688c0ba9eb74b"} Dec 02 20:27:42 crc kubenswrapper[4796]: I1202 20:27:42.677347 4796 generic.go:334] "Generic (PLEG): container finished" podID="a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef" containerID="2cf9fbd2aa773dd6b1beb8724b0266415b35435f7d630a0eaf7688c0ba9eb74b" exitCode=0 Dec 02 20:27:42 crc kubenswrapper[4796]: I1202 20:27:42.677399 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzcbh" event={"ID":"a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef","Type":"ContainerDied","Data":"2cf9fbd2aa773dd6b1beb8724b0266415b35435f7d630a0eaf7688c0ba9eb74b"} Dec 02 20:27:43 crc kubenswrapper[4796]: I1202 20:27:43.685734 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzcbh" event={"ID":"a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef","Type":"ContainerStarted","Data":"cda13d3fdfd6a3ebf876dc3c98331dfa3e9ce84f68e9ef6762bb956f0b2ad978"} Dec 02 20:27:43 crc kubenswrapper[4796]: I1202 20:27:43.705670 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hzcbh" podStartSLOduration=3.250528074 podStartE2EDuration="5.705644917s" podCreationTimestamp="2025-12-02 20:27:38 +0000 UTC" firstStartedPulling="2025-12-02 20:27:40.66169658 +0000 UTC m=+943.665072114" lastFinishedPulling="2025-12-02 20:27:43.116813423 
+0000 UTC m=+946.120188957" observedRunningTime="2025-12-02 20:27:43.704794138 +0000 UTC m=+946.708169672" watchObservedRunningTime="2025-12-02 20:27:43.705644917 +0000 UTC m=+946.709020471" Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.723465 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-cshfw"] Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.725347 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cshfw" Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.730383 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-sxkm8" Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.732227 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-jwbj8"] Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.734773 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-jwbj8" Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.743550 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-s5dtd" Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.745370 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-cshfw"] Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.774894 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-grxnn"] Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.776587 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-grxnn" Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.777493 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqvl2\" (UniqueName: \"kubernetes.io/projected/c4bff453-90ae-481c-8027-5eca98e48917-kube-api-access-rqvl2\") pod \"cinder-operator-controller-manager-859b6ccc6-jwbj8\" (UID: \"c4bff453-90ae-481c-8027-5eca98e48917\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-jwbj8" Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.777538 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knmdz\" (UniqueName: \"kubernetes.io/projected/ac795881-0ae6-43cf-9a1d-119408238bf6-kube-api-access-knmdz\") pod \"designate-operator-controller-manager-78b4bc895b-grxnn\" (UID: \"ac795881-0ae6-43cf-9a1d-119408238bf6\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-grxnn" Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.777590 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqz8w\" (UniqueName: \"kubernetes.io/projected/8161682e-0c53-41ab-bef7-99766302c3eb-kube-api-access-pqz8w\") pod \"barbican-operator-controller-manager-7d9dfd778-cshfw\" (UID: \"8161682e-0c53-41ab-bef7-99766302c3eb\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cshfw" Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.779675 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-jwbj8"] Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.780171 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-j8kqc" Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.785212 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-grxnn"] Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.792150 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-tsrx8"] Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.809175 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-shlwt"] Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.809981 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-shlwt" Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.810202 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-tsrx8" Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.819897 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-8fc8h"] Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.820215 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-jj6pl" Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.820488 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-z4fhw" Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.821611 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-8fc8h" Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.828316 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-jk4xr" Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.837618 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-shlwt"] Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.858680 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-8fc8h"] Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.873272 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-tsrx8"] Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.882465 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqvl2\" (UniqueName: \"kubernetes.io/projected/c4bff453-90ae-481c-8027-5eca98e48917-kube-api-access-rqvl2\") pod \"cinder-operator-controller-manager-859b6ccc6-jwbj8\" (UID: \"c4bff453-90ae-481c-8027-5eca98e48917\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-jwbj8" Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.882719 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knmdz\" (UniqueName: \"kubernetes.io/projected/ac795881-0ae6-43cf-9a1d-119408238bf6-kube-api-access-knmdz\") pod \"designate-operator-controller-manager-78b4bc895b-grxnn\" (UID: \"ac795881-0ae6-43cf-9a1d-119408238bf6\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-grxnn" Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.882830 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbxt4\" (UniqueName: \"kubernetes.io/projected/8cff73ad-f4f3-47a7-8ba1-985614f757a3-kube-api-access-nbxt4\") pod \"heat-operator-controller-manager-5f64f6f8bb-shlwt\" (UID: \"8cff73ad-f4f3-47a7-8ba1-985614f757a3\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-shlwt" Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.882917 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqz8w\" (UniqueName: \"kubernetes.io/projected/8161682e-0c53-41ab-bef7-99766302c3eb-kube-api-access-pqz8w\") pod \"barbican-operator-controller-manager-7d9dfd778-cshfw\" (UID: \"8161682e-0c53-41ab-bef7-99766302c3eb\") " 
pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cshfw" Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.883005 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j58zr\" (UniqueName: \"kubernetes.io/projected/9f64c0f7-3638-4f4c-bf3e-8ab0da4f2f77-kube-api-access-j58zr\") pod \"glance-operator-controller-manager-77987cd8cd-tsrx8\" (UID: \"9f64c0f7-3638-4f4c-bf3e-8ab0da4f2f77\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-tsrx8" Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.883120 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwgq7\" (UniqueName: \"kubernetes.io/projected/b751e4e4-8d34-4fcc-baca-1e0eea85f1b9-kube-api-access-kwgq7\") pod \"horizon-operator-controller-manager-68c6d99b8f-8fc8h\" (UID: \"b751e4e4-8d34-4fcc-baca-1e0eea85f1b9\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-8fc8h" Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.896327 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-n5m58"] Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.897518 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-n5m58" Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.902524 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-bv8px"] Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.903229 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.903521 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-n88zw" Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.903585 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-bv8px" Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.917755 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-n5m58"] Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.921960 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-r4qj9" Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.922131 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqvl2\" (UniqueName: \"kubernetes.io/projected/c4bff453-90ae-481c-8027-5eca98e48917-kube-api-access-rqvl2\") pod \"cinder-operator-controller-manager-859b6ccc6-jwbj8\" (UID: \"c4bff453-90ae-481c-8027-5eca98e48917\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-jwbj8" Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.927048 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-bv8px"] Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.933018 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knmdz\" (UniqueName: \"kubernetes.io/projected/ac795881-0ae6-43cf-9a1d-119408238bf6-kube-api-access-knmdz\") pod \"designate-operator-controller-manager-78b4bc895b-grxnn\" (UID: \"ac795881-0ae6-43cf-9a1d-119408238bf6\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-grxnn" Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.937594 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqz8w\" (UniqueName: \"kubernetes.io/projected/8161682e-0c53-41ab-bef7-99766302c3eb-kube-api-access-pqz8w\") pod \"barbican-operator-controller-manager-7d9dfd778-cshfw\" (UID: \"8161682e-0c53-41ab-bef7-99766302c3eb\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cshfw" Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.941420 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-qccgk"] Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.942535 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-qccgk" Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.947754 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-llhcf" Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.971165 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-qccgk"] Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.978136 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-4qgct"] Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.979567 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-4qgct" Dec 02 20:27:46 crc kubenswrapper[4796]: I1202 20:27:46.986228 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-9b42z" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.002970 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j58zr\" (UniqueName: \"kubernetes.io/projected/9f64c0f7-3638-4f4c-bf3e-8ab0da4f2f77-kube-api-access-j58zr\") pod \"glance-operator-controller-manager-77987cd8cd-tsrx8\" (UID: \"9f64c0f7-3638-4f4c-bf3e-8ab0da4f2f77\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-tsrx8" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.003067 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwgq7\" (UniqueName: \"kubernetes.io/projected/b751e4e4-8d34-4fcc-baca-1e0eea85f1b9-kube-api-access-kwgq7\") pod \"horizon-operator-controller-manager-68c6d99b8f-8fc8h\" (UID: \"b751e4e4-8d34-4fcc-baca-1e0eea85f1b9\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-8fc8h" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.003949 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/98ce5936-d2d0-480f-bbd6-79e074aa862c-cert\") pod \"infra-operator-controller-manager-57548d458d-n5m58\" (UID: \"98ce5936-d2d0-480f-bbd6-79e074aa862c\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-n5m58" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.004062 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq2dm\" (UniqueName: \"kubernetes.io/projected/98ce5936-d2d0-480f-bbd6-79e074aa862c-kube-api-access-gq2dm\") pod \"infra-operator-controller-manager-57548d458d-n5m58\" (UID: \"98ce5936-d2d0-480f-bbd6-79e074aa862c\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-n5m58" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.004226 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnc4z\" (UniqueName: \"kubernetes.io/projected/1180fd08-546d-431c-9583-10fef2f94b1f-kube-api-access-xnc4z\") pod \"keystone-operator-controller-manager-7765d96ddf-qccgk\" (UID: \"1180fd08-546d-431c-9583-10fef2f94b1f\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-qccgk" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.004337 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbxt4\" (UniqueName: \"kubernetes.io/projected/8cff73ad-f4f3-47a7-8ba1-985614f757a3-kube-api-access-nbxt4\") pod \"heat-operator-controller-manager-5f64f6f8bb-shlwt\" (UID: \"8cff73ad-f4f3-47a7-8ba1-985614f757a3\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-shlwt" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.020867 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-cnfjq"] Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.022290 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-cnfjq" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.035149 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-l5gpx" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.044336 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-s988m"] Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.045665 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-s988m" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.054212 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cshfw" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.055575 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j58zr\" (UniqueName: \"kubernetes.io/projected/9f64c0f7-3638-4f4c-bf3e-8ab0da4f2f77-kube-api-access-j58zr\") pod \"glance-operator-controller-manager-77987cd8cd-tsrx8\" (UID: \"9f64c0f7-3638-4f4c-bf3e-8ab0da4f2f77\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-tsrx8" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.060475 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-z25mk" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.072858 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-jwbj8" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.073353 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-4qgct"] Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.085748 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbxt4\" (UniqueName: \"kubernetes.io/projected/8cff73ad-f4f3-47a7-8ba1-985614f757a3-kube-api-access-nbxt4\") pod \"heat-operator-controller-manager-5f64f6f8bb-shlwt\" (UID: \"8cff73ad-f4f3-47a7-8ba1-985614f757a3\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-shlwt" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.110626 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwgq7\" (UniqueName: \"kubernetes.io/projected/b751e4e4-8d34-4fcc-baca-1e0eea85f1b9-kube-api-access-kwgq7\") pod \"horizon-operator-controller-manager-68c6d99b8f-8fc8h\" (UID: \"b751e4e4-8d34-4fcc-baca-1e0eea85f1b9\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-8fc8h" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.123987 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-grxnn" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.146079 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/98ce5936-d2d0-480f-bbd6-79e074aa862c-cert\") pod \"infra-operator-controller-manager-57548d458d-n5m58\" (UID: \"98ce5936-d2d0-480f-bbd6-79e074aa862c\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-n5m58" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.146160 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq2dm\" (UniqueName: \"kubernetes.io/projected/98ce5936-d2d0-480f-bbd6-79e074aa862c-kube-api-access-gq2dm\") pod \"infra-operator-controller-manager-57548d458d-n5m58\" (UID: \"98ce5936-d2d0-480f-bbd6-79e074aa862c\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-n5m58" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.146233 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfz49\" (UniqueName: \"kubernetes.io/projected/741037fd-f9c3-4461-8f0c-d94f1f869ec0-kube-api-access-kfz49\") pod \"ironic-operator-controller-manager-6c548fd776-bv8px\" (UID: \"741037fd-f9c3-4461-8f0c-d94f1f869ec0\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-bv8px" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.146444 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnc4z\" (UniqueName: \"kubernetes.io/projected/1180fd08-546d-431c-9583-10fef2f94b1f-kube-api-access-xnc4z\") pod \"keystone-operator-controller-manager-7765d96ddf-qccgk\" (UID: \"1180fd08-546d-431c-9583-10fef2f94b1f\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-qccgk" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.146597 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mqf4\" (UniqueName: \"kubernetes.io/projected/8fa1598d-65f1-4841-8609-2c07e7dc8ffd-kube-api-access-8mqf4\") pod \"mariadb-operator-controller-manager-56bbcc9d85-4qgct\" (UID: \"8fa1598d-65f1-4841-8609-2c07e7dc8ffd\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-4qgct" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.147029 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-shlwt" Dec 02 20:27:47 crc kubenswrapper[4796]: E1202 20:27:47.160080 4796 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 02 20:27:47 crc kubenswrapper[4796]: E1202 20:27:47.160179 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98ce5936-d2d0-480f-bbd6-79e074aa862c-cert podName:98ce5936-d2d0-480f-bbd6-79e074aa862c nodeName:}" failed. No retries permitted until 2025-12-02 20:27:47.66015086 +0000 UTC m=+950.663526394 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/98ce5936-d2d0-480f-bbd6-79e074aa862c-cert") pod "infra-operator-controller-manager-57548d458d-n5m58" (UID: "98ce5936-d2d0-480f-bbd6-79e074aa862c") : secret "infra-operator-webhook-server-cert" not found Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.161077 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-8fc8h" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.161215 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-tsrx8" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.162556 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-cnfjq"] Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.248352 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-s988m"] Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.249099 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq2dm\" (UniqueName: \"kubernetes.io/projected/98ce5936-d2d0-480f-bbd6-79e074aa862c-kube-api-access-gq2dm\") pod \"infra-operator-controller-manager-57548d458d-n5m58\" (UID: \"98ce5936-d2d0-480f-bbd6-79e074aa862c\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-n5m58" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.249359 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnc4z\" (UniqueName: \"kubernetes.io/projected/1180fd08-546d-431c-9583-10fef2f94b1f-kube-api-access-xnc4z\") pod \"keystone-operator-controller-manager-7765d96ddf-qccgk\" (UID: \"1180fd08-546d-431c-9583-10fef2f94b1f\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-qccgk" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.265807 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfz49\" (UniqueName: \"kubernetes.io/projected/741037fd-f9c3-4461-8f0c-d94f1f869ec0-kube-api-access-kfz49\") pod \"ironic-operator-controller-manager-6c548fd776-bv8px\" (UID: \"741037fd-f9c3-4461-8f0c-d94f1f869ec0\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-bv8px" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.265913 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj9h5\" (UniqueName: \"kubernetes.io/projected/87e62b79-fb94-4209-88b2-c2b6b0966181-kube-api-access-bj9h5\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-s988m\" (UID: \"87e62b79-fb94-4209-88b2-c2b6b0966181\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-s988m" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.265962 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mqf4\" (UniqueName: \"kubernetes.io/projected/8fa1598d-65f1-4841-8609-2c07e7dc8ffd-kube-api-access-8mqf4\") pod \"mariadb-operator-controller-manager-56bbcc9d85-4qgct\" (UID: \"8fa1598d-65f1-4841-8609-2c07e7dc8ffd\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-4qgct" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.265987 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8wmb\" (UniqueName: \"kubernetes.io/projected/84c8fa19-9b42-4ac0-bcf2-2f7b3450c8f6-kube-api-access-x8wmb\") pod \"manila-operator-controller-manager-7c79b5df47-cnfjq\" (UID: \"84c8fa19-9b42-4ac0-bcf2-2f7b3450c8f6\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-cnfjq" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.289171 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-qccgk" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.322871 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mqf4\" (UniqueName: \"kubernetes.io/projected/8fa1598d-65f1-4841-8609-2c07e7dc8ffd-kube-api-access-8mqf4\") pod \"mariadb-operator-controller-manager-56bbcc9d85-4qgct\" (UID: \"8fa1598d-65f1-4841-8609-2c07e7dc8ffd\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-4qgct" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.329924 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfz49\" (UniqueName: \"kubernetes.io/projected/741037fd-f9c3-4461-8f0c-d94f1f869ec0-kube-api-access-kfz49\") pod \"ironic-operator-controller-manager-6c548fd776-bv8px\" (UID: \"741037fd-f9c3-4461-8f0c-d94f1f869ec0\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-bv8px" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.366971 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8wmb\" (UniqueName: \"kubernetes.io/projected/84c8fa19-9b42-4ac0-bcf2-2f7b3450c8f6-kube-api-access-x8wmb\") pod \"manila-operator-controller-manager-7c79b5df47-cnfjq\" (UID: \"84c8fa19-9b42-4ac0-bcf2-2f7b3450c8f6\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-cnfjq" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.367234 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj9h5\" (UniqueName: \"kubernetes.io/projected/87e62b79-fb94-4209-88b2-c2b6b0966181-kube-api-access-bj9h5\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-s988m\" (UID: \"87e62b79-fb94-4209-88b2-c2b6b0966181\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-s988m" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.416563 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj9h5\" (UniqueName: \"kubernetes.io/projected/87e62b79-fb94-4209-88b2-c2b6b0966181-kube-api-access-bj9h5\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-s988m\" (UID: \"87e62b79-fb94-4209-88b2-c2b6b0966181\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-s988m" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.469051 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8wmb\" (UniqueName: \"kubernetes.io/projected/84c8fa19-9b42-4ac0-bcf2-2f7b3450c8f6-kube-api-access-x8wmb\") pod \"manila-operator-controller-manager-7c79b5df47-cnfjq\" (UID: \"84c8fa19-9b42-4ac0-bcf2-2f7b3450c8f6\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-cnfjq" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.477614 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-t8wwl"] Dec 02 20:27:47 crc 
kubenswrapper[4796]: I1202 20:27:47.478674 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-tmdq9"] Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.479435 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4nbjm9"] Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.480157 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-tmdq9"] Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.480174 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-t8wwl"] Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.480183 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-5zl48"] Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.483430 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-t8wwl" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.484309 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tmdq9" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.487118 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4nbjm9" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.488602 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4nbjm9"] Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.488658 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-5zl48"] Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.488674 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-6wk6f"] Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.488703 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-5zl48" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.489679 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-c2rnz" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.490213 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-f6fxw" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.490368 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.499163 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-6wk6f"] Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.499194 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-8tgcn"] Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.500199 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-8tgcn" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.500382 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-6wk6f" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.502735 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-7gt8q" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.502933 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-4m7wc" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.511686 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-582wr" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.511940 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-cjhxw" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.521178 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-s988m" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.531180 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-8tgcn"] Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.544151 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-lgj8f"] Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.545386 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-lgj8f" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.556873 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-jvrls" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.580414 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-bv8px" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.582959 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-j9v7j"] Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.584297 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-j9v7j" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.598435 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-ck8zl" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.604552 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-4qgct" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.613523 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-76f4f8cb8b-nfzhk"] Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.614717 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-76f4f8cb8b-nfzhk" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.620517 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-7b4x2" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.649701 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-lgj8f"] Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.654317 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-cnfjq" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.673277 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd7vf\" (UniqueName: \"kubernetes.io/projected/a5e8b895-e788-44f4-8481-520f1cbd75c0-kube-api-access-pd7vf\") pod \"swift-operator-controller-manager-5f8c65bbfc-8tgcn\" (UID: \"a5e8b895-e788-44f4-8481-520f1cbd75c0\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-8tgcn" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.673329 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brcbx\" (UniqueName: \"kubernetes.io/projected/0b0e9209-3a80-4f10-9b56-4d3d28d0dee2-kube-api-access-brcbx\") pod \"telemetry-operator-controller-manager-76cc84c6bb-lgj8f\" (UID: \"0b0e9209-3a80-4f10-9b56-4d3d28d0dee2\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-lgj8f" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.673370 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zzmn\" (UniqueName: \"kubernetes.io/projected/4f7d1bce-8d2f-4f79-9b65-067b14abc6ca-kube-api-access-6zzmn\") pod \"ovn-operator-controller-manager-b6456fdb6-5zl48\" (UID: \"4f7d1bce-8d2f-4f79-9b65-067b14abc6ca\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-5zl48" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.673398 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/98ce5936-d2d0-480f-bbd6-79e074aa862c-cert\") pod \"infra-operator-controller-manager-57548d458d-n5m58\" (UID: \"98ce5936-d2d0-480f-bbd6-79e074aa862c\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-n5m58" Dec 02 20:27:47 crc kubenswrapper[4796]: E1202 20:27:47.673518 4796 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 02 20:27:47 crc kubenswrapper[4796]: E1202 20:27:47.673587 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98ce5936-d2d0-480f-bbd6-79e074aa862c-cert podName:98ce5936-d2d0-480f-bbd6-79e074aa862c nodeName:}" failed. No retries permitted until 2025-12-02 20:27:48.673566972 +0000 UTC m=+951.676942576 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/98ce5936-d2d0-480f-bbd6-79e074aa862c-cert") pod "infra-operator-controller-manager-57548d458d-n5m58" (UID: "98ce5936-d2d0-480f-bbd6-79e074aa862c") : secret "infra-operator-webhook-server-cert" not found Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.673668 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gc8x\" (UniqueName: \"kubernetes.io/projected/cefeb832-8af2-4be7-a143-a5ee5e28a091-kube-api-access-8gc8x\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4nbjm9\" (UID: \"cefeb832-8af2-4be7-a143-a5ee5e28a091\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4nbjm9" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.673691 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cefeb832-8af2-4be7-a143-a5ee5e28a091-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4nbjm9\" (UID: \"cefeb832-8af2-4be7-a143-a5ee5e28a091\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4nbjm9" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.673715 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck9mw\" (UniqueName: \"kubernetes.io/projected/07c7a00f-b0ab-4f7a-bbfe-2b137d1541b7-kube-api-access-ck9mw\") pod \"placement-operator-controller-manager-78f8948974-6wk6f\" (UID: \"07c7a00f-b0ab-4f7a-bbfe-2b137d1541b7\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-6wk6f" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.673732 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmctc\" (UniqueName: \"kubernetes.io/projected/936371ab-213c-4f66-92ce-c9f5a79e3aa6-kube-api-access-bmctc\") pod \"nova-operator-controller-manager-697bc559fc-t8wwl\" (UID: \"936371ab-213c-4f66-92ce-c9f5a79e3aa6\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-t8wwl" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.674135 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmkzb\" (UniqueName: \"kubernetes.io/projected/8f4b078d-0d59-4f07-97d7-0537e71a7770-kube-api-access-tmkzb\") pod \"octavia-operator-controller-manager-998648c74-tmdq9\" (UID: \"8f4b078d-0d59-4f07-97d7-0537e71a7770\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-tmdq9" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.678755 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-j9v7j"] Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.727962 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-76f4f8cb8b-nfzhk"] Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.778366 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-754dcb5d59-v9gsq"] Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.779311 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-754dcb5d59-v9gsq" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.786987 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xhcd\" (UniqueName: \"kubernetes.io/projected/7b338d05-8a86-49bf-b996-71e686d384b2-kube-api-access-6xhcd\") pod \"test-operator-controller-manager-5854674fcc-j9v7j\" (UID: \"7b338d05-8a86-49bf-b996-71e686d384b2\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-j9v7j" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.787034 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3117ee94-1d46-46b4-b567-508d22bc6bac-metrics-certs\") pod \"openstack-operator-controller-manager-754dcb5d59-v9gsq\" (UID: \"3117ee94-1d46-46b4-b567-508d22bc6bac\") " pod="openstack-operators/openstack-operator-controller-manager-754dcb5d59-v9gsq" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.787059 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmkzb\" (UniqueName: \"kubernetes.io/projected/8f4b078d-0d59-4f07-97d7-0537e71a7770-kube-api-access-tmkzb\") pod \"octavia-operator-controller-manager-998648c74-tmdq9\" (UID: \"8f4b078d-0d59-4f07-97d7-0537e71a7770\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-tmdq9" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.787088 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pd7vf\" (UniqueName: \"kubernetes.io/projected/a5e8b895-e788-44f4-8481-520f1cbd75c0-kube-api-access-pd7vf\") pod \"swift-operator-controller-manager-5f8c65bbfc-8tgcn\" (UID: \"a5e8b895-e788-44f4-8481-520f1cbd75c0\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-8tgcn" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.787114 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brcbx\" (UniqueName: \"kubernetes.io/projected/0b0e9209-3a80-4f10-9b56-4d3d28d0dee2-kube-api-access-brcbx\") pod \"telemetry-operator-controller-manager-76cc84c6bb-lgj8f\" (UID: \"0b0e9209-3a80-4f10-9b56-4d3d28d0dee2\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-lgj8f" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.787133 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw4g2\" (UniqueName: \"kubernetes.io/projected/718dd3d4-158b-4dcd-912e-51e3cfa993e7-kube-api-access-xw4g2\") pod \"watcher-operator-controller-manager-76f4f8cb8b-nfzhk\" (UID: \"718dd3d4-158b-4dcd-912e-51e3cfa993e7\") " pod="openstack-operators/watcher-operator-controller-manager-76f4f8cb8b-nfzhk" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.787152 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zzmn\" (UniqueName: \"kubernetes.io/projected/4f7d1bce-8d2f-4f79-9b65-067b14abc6ca-kube-api-access-6zzmn\") pod \"ovn-operator-controller-manager-b6456fdb6-5zl48\" (UID: \"4f7d1bce-8d2f-4f79-9b65-067b14abc6ca\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-5zl48" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.787192 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gc8x\" (UniqueName: 
\"kubernetes.io/projected/cefeb832-8af2-4be7-a143-a5ee5e28a091-kube-api-access-8gc8x\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4nbjm9\" (UID: \"cefeb832-8af2-4be7-a143-a5ee5e28a091\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4nbjm9" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.787211 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cefeb832-8af2-4be7-a143-a5ee5e28a091-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4nbjm9\" (UID: \"cefeb832-8af2-4be7-a143-a5ee5e28a091\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4nbjm9" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.787230 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ck9mw\" (UniqueName: \"kubernetes.io/projected/07c7a00f-b0ab-4f7a-bbfe-2b137d1541b7-kube-api-access-ck9mw\") pod \"placement-operator-controller-manager-78f8948974-6wk6f\" (UID: \"07c7a00f-b0ab-4f7a-bbfe-2b137d1541b7\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-6wk6f" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.787262 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmctc\" (UniqueName: \"kubernetes.io/projected/936371ab-213c-4f66-92ce-c9f5a79e3aa6-kube-api-access-bmctc\") pod \"nova-operator-controller-manager-697bc559fc-t8wwl\" (UID: \"936371ab-213c-4f66-92ce-c9f5a79e3aa6\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-t8wwl" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.787292 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tznh9\" (UniqueName: \"kubernetes.io/projected/3117ee94-1d46-46b4-b567-508d22bc6bac-kube-api-access-tznh9\") pod \"openstack-operator-controller-manager-754dcb5d59-v9gsq\" (UID: \"3117ee94-1d46-46b4-b567-508d22bc6bac\") " pod="openstack-operators/openstack-operator-controller-manager-754dcb5d59-v9gsq" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.787307 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3117ee94-1d46-46b4-b567-508d22bc6bac-webhook-certs\") pod \"openstack-operator-controller-manager-754dcb5d59-v9gsq\" (UID: \"3117ee94-1d46-46b4-b567-508d22bc6bac\") " pod="openstack-operators/openstack-operator-controller-manager-754dcb5d59-v9gsq" Dec 02 20:27:47 crc kubenswrapper[4796]: E1202 20:27:47.787436 4796 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 20:27:47 crc kubenswrapper[4796]: E1202 20:27:47.787488 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cefeb832-8af2-4be7-a143-a5ee5e28a091-cert podName:cefeb832-8af2-4be7-a143-a5ee5e28a091 nodeName:}" failed. No retries permitted until 2025-12-02 20:27:48.287468146 +0000 UTC m=+951.290843680 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cefeb832-8af2-4be7-a143-a5ee5e28a091-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4nbjm9" (UID: "cefeb832-8af2-4be7-a143-a5ee5e28a091") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.788492 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-jp5hs" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.788655 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.800869 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.861297 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brcbx\" (UniqueName: \"kubernetes.io/projected/0b0e9209-3a80-4f10-9b56-4d3d28d0dee2-kube-api-access-brcbx\") pod \"telemetry-operator-controller-manager-76cc84c6bb-lgj8f\" (UID: \"0b0e9209-3a80-4f10-9b56-4d3d28d0dee2\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-lgj8f" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.861330 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmkzb\" (UniqueName: \"kubernetes.io/projected/8f4b078d-0d59-4f07-97d7-0537e71a7770-kube-api-access-tmkzb\") pod \"octavia-operator-controller-manager-998648c74-tmdq9\" (UID: \"8f4b078d-0d59-4f07-97d7-0537e71a7770\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-tmdq9" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.864147 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd7vf\" (UniqueName: \"kubernetes.io/projected/a5e8b895-e788-44f4-8481-520f1cbd75c0-kube-api-access-pd7vf\") pod \"swift-operator-controller-manager-5f8c65bbfc-8tgcn\" (UID: \"a5e8b895-e788-44f4-8481-520f1cbd75c0\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-8tgcn" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.864405 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gc8x\" (UniqueName: \"kubernetes.io/projected/cefeb832-8af2-4be7-a143-a5ee5e28a091-kube-api-access-8gc8x\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4nbjm9\" (UID: \"cefeb832-8af2-4be7-a143-a5ee5e28a091\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4nbjm9" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.883798 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmctc\" (UniqueName: \"kubernetes.io/projected/936371ab-213c-4f66-92ce-c9f5a79e3aa6-kube-api-access-bmctc\") pod \"nova-operator-controller-manager-697bc559fc-t8wwl\" (UID: \"936371ab-213c-4f66-92ce-c9f5a79e3aa6\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-t8wwl" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.887054 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zzmn\" (UniqueName: \"kubernetes.io/projected/4f7d1bce-8d2f-4f79-9b65-067b14abc6ca-kube-api-access-6zzmn\") pod \"ovn-operator-controller-manager-b6456fdb6-5zl48\" (UID: \"4f7d1bce-8d2f-4f79-9b65-067b14abc6ca\") " 
pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-5zl48" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.904726 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw4g2\" (UniqueName: \"kubernetes.io/projected/718dd3d4-158b-4dcd-912e-51e3cfa993e7-kube-api-access-xw4g2\") pod \"watcher-operator-controller-manager-76f4f8cb8b-nfzhk\" (UID: \"718dd3d4-158b-4dcd-912e-51e3cfa993e7\") " pod="openstack-operators/watcher-operator-controller-manager-76f4f8cb8b-nfzhk" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.911874 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3117ee94-1d46-46b4-b567-508d22bc6bac-webhook-certs\") pod \"openstack-operator-controller-manager-754dcb5d59-v9gsq\" (UID: \"3117ee94-1d46-46b4-b567-508d22bc6bac\") " pod="openstack-operators/openstack-operator-controller-manager-754dcb5d59-v9gsq" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.911940 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tznh9\" (UniqueName: \"kubernetes.io/projected/3117ee94-1d46-46b4-b567-508d22bc6bac-kube-api-access-tznh9\") pod \"openstack-operator-controller-manager-754dcb5d59-v9gsq\" (UID: \"3117ee94-1d46-46b4-b567-508d22bc6bac\") " pod="openstack-operators/openstack-operator-controller-manager-754dcb5d59-v9gsq" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.912023 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xhcd\" (UniqueName: \"kubernetes.io/projected/7b338d05-8a86-49bf-b996-71e686d384b2-kube-api-access-6xhcd\") pod \"test-operator-controller-manager-5854674fcc-j9v7j\" (UID: \"7b338d05-8a86-49bf-b996-71e686d384b2\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-j9v7j" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.912059 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3117ee94-1d46-46b4-b567-508d22bc6bac-metrics-certs\") pod \"openstack-operator-controller-manager-754dcb5d59-v9gsq\" (UID: \"3117ee94-1d46-46b4-b567-508d22bc6bac\") " pod="openstack-operators/openstack-operator-controller-manager-754dcb5d59-v9gsq" Dec 02 20:27:47 crc kubenswrapper[4796]: E1202 20:27:47.912431 4796 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 02 20:27:47 crc kubenswrapper[4796]: E1202 20:27:47.912509 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3117ee94-1d46-46b4-b567-508d22bc6bac-metrics-certs podName:3117ee94-1d46-46b4-b567-508d22bc6bac nodeName:}" failed. No retries permitted until 2025-12-02 20:27:48.412480869 +0000 UTC m=+951.415856403 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3117ee94-1d46-46b4-b567-508d22bc6bac-metrics-certs") pod "openstack-operator-controller-manager-754dcb5d59-v9gsq" (UID: "3117ee94-1d46-46b4-b567-508d22bc6bac") : secret "metrics-server-cert" not found Dec 02 20:27:47 crc kubenswrapper[4796]: E1202 20:27:47.913557 4796 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 02 20:27:47 crc kubenswrapper[4796]: E1202 20:27:47.913621 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3117ee94-1d46-46b4-b567-508d22bc6bac-webhook-certs podName:3117ee94-1d46-46b4-b567-508d22bc6bac nodeName:}" failed. No retries permitted until 2025-12-02 20:27:48.413609636 +0000 UTC m=+951.416985170 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3117ee94-1d46-46b4-b567-508d22bc6bac-webhook-certs") pod "openstack-operator-controller-manager-754dcb5d59-v9gsq" (UID: "3117ee94-1d46-46b4-b567-508d22bc6bac") : secret "webhook-server-cert" not found Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.921746 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-754dcb5d59-v9gsq"] Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.922735 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck9mw\" (UniqueName: \"kubernetes.io/projected/07c7a00f-b0ab-4f7a-bbfe-2b137d1541b7-kube-api-access-ck9mw\") pod \"placement-operator-controller-manager-78f8948974-6wk6f\" (UID: \"07c7a00f-b0ab-4f7a-bbfe-2b137d1541b7\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-6wk6f" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.927967 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-5zl48" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.955769 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-t8wwl" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.957718 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xhcd\" (UniqueName: \"kubernetes.io/projected/7b338d05-8a86-49bf-b996-71e686d384b2-kube-api-access-6xhcd\") pod \"test-operator-controller-manager-5854674fcc-j9v7j\" (UID: \"7b338d05-8a86-49bf-b996-71e686d384b2\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-j9v7j" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.976897 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tznh9\" (UniqueName: \"kubernetes.io/projected/3117ee94-1d46-46b4-b567-508d22bc6bac-kube-api-access-tznh9\") pod \"openstack-operator-controller-manager-754dcb5d59-v9gsq\" (UID: \"3117ee94-1d46-46b4-b567-508d22bc6bac\") " pod="openstack-operators/openstack-operator-controller-manager-754dcb5d59-v9gsq" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.981265 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw4g2\" (UniqueName: \"kubernetes.io/projected/718dd3d4-158b-4dcd-912e-51e3cfa993e7-kube-api-access-xw4g2\") pod \"watcher-operator-controller-manager-76f4f8cb8b-nfzhk\" (UID: \"718dd3d4-158b-4dcd-912e-51e3cfa993e7\") " pod="openstack-operators/watcher-operator-controller-manager-76f4f8cb8b-nfzhk" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.981377 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-27kww"] Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.988628 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-27kww" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.989492 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-6wk6f" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.990392 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-8tgcn" Dec 02 20:27:47 crc kubenswrapper[4796]: I1202 20:27:47.993169 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-b2msl" Dec 02 20:27:48 crc kubenswrapper[4796]: I1202 20:27:48.000697 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-27kww"] Dec 02 20:27:48 crc kubenswrapper[4796]: I1202 20:27:48.020826 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-lgj8f" Dec 02 20:27:48 crc kubenswrapper[4796]: I1202 20:27:48.030242 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-j9v7j" Dec 02 20:27:48 crc kubenswrapper[4796]: I1202 20:27:48.059604 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-76f4f8cb8b-nfzhk" Dec 02 20:27:48 crc kubenswrapper[4796]: I1202 20:27:48.130420 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgwx4\" (UniqueName: \"kubernetes.io/projected/0f460523-61e4-4cb6-9642-908cfa76579d-kube-api-access-sgwx4\") pod \"rabbitmq-cluster-operator-manager-668c99d594-27kww\" (UID: \"0f460523-61e4-4cb6-9642-908cfa76579d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-27kww" Dec 02 20:27:48 crc kubenswrapper[4796]: I1202 20:27:48.162791 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tmdq9" Dec 02 20:27:48 crc kubenswrapper[4796]: I1202 20:27:48.234763 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgwx4\" (UniqueName: \"kubernetes.io/projected/0f460523-61e4-4cb6-9642-908cfa76579d-kube-api-access-sgwx4\") pod \"rabbitmq-cluster-operator-manager-668c99d594-27kww\" (UID: \"0f460523-61e4-4cb6-9642-908cfa76579d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-27kww" Dec 02 20:27:48 crc kubenswrapper[4796]: I1202 20:27:48.254306 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgwx4\" (UniqueName: \"kubernetes.io/projected/0f460523-61e4-4cb6-9642-908cfa76579d-kube-api-access-sgwx4\") pod \"rabbitmq-cluster-operator-manager-668c99d594-27kww\" (UID: \"0f460523-61e4-4cb6-9642-908cfa76579d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-27kww" Dec 02 20:27:48 crc kubenswrapper[4796]: I1202 20:27:48.306357 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-shlwt"] Dec 02 20:27:48 crc kubenswrapper[4796]: I1202 20:27:48.336080 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cefeb832-8af2-4be7-a143-a5ee5e28a091-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4nbjm9\" (UID: \"cefeb832-8af2-4be7-a143-a5ee5e28a091\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4nbjm9" Dec 02 20:27:48 crc kubenswrapper[4796]: E1202 20:27:48.336246 4796 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 20:27:48 crc kubenswrapper[4796]: E1202 20:27:48.336541 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cefeb832-8af2-4be7-a143-a5ee5e28a091-cert podName:cefeb832-8af2-4be7-a143-a5ee5e28a091 nodeName:}" failed. No retries permitted until 2025-12-02 20:27:49.33652182 +0000 UTC m=+952.339897354 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cefeb832-8af2-4be7-a143-a5ee5e28a091-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4nbjm9" (UID: "cefeb832-8af2-4be7-a143-a5ee5e28a091") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 20:27:48 crc kubenswrapper[4796]: I1202 20:27:48.404722 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-27kww" Dec 02 20:27:48 crc kubenswrapper[4796]: I1202 20:27:48.438059 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3117ee94-1d46-46b4-b567-508d22bc6bac-webhook-certs\") pod \"openstack-operator-controller-manager-754dcb5d59-v9gsq\" (UID: \"3117ee94-1d46-46b4-b567-508d22bc6bac\") " pod="openstack-operators/openstack-operator-controller-manager-754dcb5d59-v9gsq" Dec 02 20:27:48 crc kubenswrapper[4796]: I1202 20:27:48.438236 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3117ee94-1d46-46b4-b567-508d22bc6bac-metrics-certs\") pod \"openstack-operator-controller-manager-754dcb5d59-v9gsq\" (UID: \"3117ee94-1d46-46b4-b567-508d22bc6bac\") " pod="openstack-operators/openstack-operator-controller-manager-754dcb5d59-v9gsq" Dec 02 20:27:48 crc kubenswrapper[4796]: E1202 20:27:48.438195 4796 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 02 20:27:48 crc kubenswrapper[4796]: E1202 20:27:48.438432 4796 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 02 20:27:48 crc kubenswrapper[4796]: E1202 20:27:48.438519 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3117ee94-1d46-46b4-b567-508d22bc6bac-webhook-certs podName:3117ee94-1d46-46b4-b567-508d22bc6bac nodeName:}" failed. No retries permitted until 2025-12-02 20:27:49.438465454 +0000 UTC m=+952.441841178 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3117ee94-1d46-46b4-b567-508d22bc6bac-webhook-certs") pod "openstack-operator-controller-manager-754dcb5d59-v9gsq" (UID: "3117ee94-1d46-46b4-b567-508d22bc6bac") : secret "webhook-server-cert" not found Dec 02 20:27:48 crc kubenswrapper[4796]: E1202 20:27:48.438554 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3117ee94-1d46-46b4-b567-508d22bc6bac-metrics-certs podName:3117ee94-1d46-46b4-b567-508d22bc6bac nodeName:}" failed. No retries permitted until 2025-12-02 20:27:49.438542485 +0000 UTC m=+952.441918289 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3117ee94-1d46-46b4-b567-508d22bc6bac-metrics-certs") pod "openstack-operator-controller-manager-754dcb5d59-v9gsq" (UID: "3117ee94-1d46-46b4-b567-508d22bc6bac") : secret "metrics-server-cert" not found Dec 02 20:27:48 crc kubenswrapper[4796]: I1202 20:27:48.461120 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-jwbj8"] Dec 02 20:27:48 crc kubenswrapper[4796]: I1202 20:27:48.482537 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-cshfw"] Dec 02 20:27:48 crc kubenswrapper[4796]: W1202 20:27:48.510283 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4bff453_90ae_481c_8027_5eca98e48917.slice/crio-73530c83c553a5f7b19e0bbc130beaca13df66613176a98c3fd8e67eda2da44d WatchSource:0}: Error finding container 73530c83c553a5f7b19e0bbc130beaca13df66613176a98c3fd8e67eda2da44d: Status 404 returned error can't find the container with id 73530c83c553a5f7b19e0bbc130beaca13df66613176a98c3fd8e67eda2da44d Dec 02 20:27:48 crc kubenswrapper[4796]: I1202 20:27:48.601803 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-grxnn"] Dec 02 20:27:48 crc kubenswrapper[4796]: I1202 20:27:48.622904 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-tsrx8"] Dec 02 20:27:48 crc kubenswrapper[4796]: W1202 20:27:48.628048 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f64c0f7_3638_4f4c_bf3e_8ab0da4f2f77.slice/crio-9e0465ca505e9cd537a99fa1d5dd18950cb47b6fd8887798398f09f9aec41e00 WatchSource:0}: Error finding container 9e0465ca505e9cd537a99fa1d5dd18950cb47b6fd8887798398f09f9aec41e00: Status 404 returned error can't find the container with id 9e0465ca505e9cd537a99fa1d5dd18950cb47b6fd8887798398f09f9aec41e00 Dec 02 20:27:48 crc kubenswrapper[4796]: I1202 20:27:48.632688 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-8fc8h"] Dec 02 20:27:48 crc kubenswrapper[4796]: I1202 20:27:48.742203 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/98ce5936-d2d0-480f-bbd6-79e074aa862c-cert\") pod \"infra-operator-controller-manager-57548d458d-n5m58\" (UID: \"98ce5936-d2d0-480f-bbd6-79e074aa862c\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-n5m58" Dec 02 20:27:48 crc kubenswrapper[4796]: E1202 20:27:48.742445 4796 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 02 20:27:48 crc kubenswrapper[4796]: E1202 20:27:48.742528 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98ce5936-d2d0-480f-bbd6-79e074aa862c-cert podName:98ce5936-d2d0-480f-bbd6-79e074aa862c nodeName:}" failed. No retries permitted until 2025-12-02 20:27:50.742508755 +0000 UTC m=+953.745884279 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/98ce5936-d2d0-480f-bbd6-79e074aa862c-cert") pod "infra-operator-controller-manager-57548d458d-n5m58" (UID: "98ce5936-d2d0-480f-bbd6-79e074aa862c") : secret "infra-operator-webhook-server-cert" not found Dec 02 20:27:48 crc kubenswrapper[4796]: I1202 20:27:48.779332 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-grxnn" event={"ID":"ac795881-0ae6-43cf-9a1d-119408238bf6","Type":"ContainerStarted","Data":"8eacc4c05c515429146a23d14784d3c83941823369124c368bccd1e30ce3475a"} Dec 02 20:27:48 crc kubenswrapper[4796]: I1202 20:27:48.782078 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-shlwt" event={"ID":"8cff73ad-f4f3-47a7-8ba1-985614f757a3","Type":"ContainerStarted","Data":"9c223e8e967fdb8c75c905461cd901635d22c7cf56a4f74850ba86b559bf1774"} Dec 02 20:27:48 crc kubenswrapper[4796]: I1202 20:27:48.785123 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-jwbj8" event={"ID":"c4bff453-90ae-481c-8027-5eca98e48917","Type":"ContainerStarted","Data":"73530c83c553a5f7b19e0bbc130beaca13df66613176a98c3fd8e67eda2da44d"} Dec 02 20:27:48 crc kubenswrapper[4796]: I1202 20:27:48.788674 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cshfw" event={"ID":"8161682e-0c53-41ab-bef7-99766302c3eb","Type":"ContainerStarted","Data":"21f71fc22397bb6fba150e65658200e6b0f3c7151b4adaaba8c94f3728b25df8"} Dec 02 20:27:48 crc kubenswrapper[4796]: I1202 20:27:48.788795 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-bv8px"] Dec 02 20:27:48 crc kubenswrapper[4796]: I1202 20:27:48.789923 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-8fc8h" event={"ID":"b751e4e4-8d34-4fcc-baca-1e0eea85f1b9","Type":"ContainerStarted","Data":"6d3bad6b2219284adcbfbbfbfe7a0085835adf521dfe9fedde94aa54d60af055"} Dec 02 20:27:48 crc kubenswrapper[4796]: I1202 20:27:48.793451 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-tsrx8" event={"ID":"9f64c0f7-3638-4f4c-bf3e-8ab0da4f2f77","Type":"ContainerStarted","Data":"9e0465ca505e9cd537a99fa1d5dd18950cb47b6fd8887798398f09f9aec41e00"} Dec 02 20:27:48 crc kubenswrapper[4796]: I1202 20:27:48.794791 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hzcbh" Dec 02 20:27:48 crc kubenswrapper[4796]: I1202 20:27:48.798460 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hzcbh" Dec 02 20:27:48 crc kubenswrapper[4796]: W1202 20:27:48.800548 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod741037fd_f9c3_4461_8f0c_d94f1f869ec0.slice/crio-6448788c495d3812a1f7af25fea42dd29701c637f1b40dc1cc3cfa87bf1d26a5 WatchSource:0}: Error finding container 6448788c495d3812a1f7af25fea42dd29701c637f1b40dc1cc3cfa87bf1d26a5: Status 404 returned error can't find the container with id 6448788c495d3812a1f7af25fea42dd29701c637f1b40dc1cc3cfa87bf1d26a5 Dec 02 20:27:48 crc kubenswrapper[4796]: I1202 
20:27:48.808777 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-qccgk"] Dec 02 20:27:48 crc kubenswrapper[4796]: I1202 20:27:48.840826 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-s988m"] Dec 02 20:27:48 crc kubenswrapper[4796]: I1202 20:27:48.890797 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hzcbh" Dec 02 20:27:49 crc kubenswrapper[4796]: I1202 20:27:49.018410 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-4qgct"] Dec 02 20:27:49 crc kubenswrapper[4796]: I1202 20:27:49.028696 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-5zl48"] Dec 02 20:27:49 crc kubenswrapper[4796]: I1202 20:27:49.053700 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-t8wwl"] Dec 02 20:27:49 crc kubenswrapper[4796]: W1202 20:27:49.067860 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod936371ab_213c_4f66_92ce_c9f5a79e3aa6.slice/crio-5d85ce8000717a405d42f02cdfc1b97979aa510562f01cc50368ec991898b97d WatchSource:0}: Error finding container 5d85ce8000717a405d42f02cdfc1b97979aa510562f01cc50368ec991898b97d: Status 404 returned error can't find the container with id 5d85ce8000717a405d42f02cdfc1b97979aa510562f01cc50368ec991898b97d Dec 02 20:27:49 crc kubenswrapper[4796]: I1202 20:27:49.070561 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-8tgcn"] Dec 02 20:27:49 crc kubenswrapper[4796]: I1202 20:27:49.081781 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-cnfjq"] Dec 02 20:27:49 crc kubenswrapper[4796]: E1202 20:27:49.097191 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:2e59cfbeefc3aff0bb0a6ae9ce2235129f5173c98dd5ee8dac229ad4895faea9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x8wmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7c79b5df47-cnfjq_openstack-operators(84c8fa19-9b42-4ac0-bcf2-2f7b3450c8f6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 20:27:49 crc kubenswrapper[4796]: E1202 20:27:49.106896 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x8wmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7c79b5df47-cnfjq_openstack-operators(84c8fa19-9b42-4ac0-bcf2-2f7b3450c8f6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 20:27:49 crc kubenswrapper[4796]: E1202 20:27:49.108109 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-cnfjq" podUID="84c8fa19-9b42-4ac0-bcf2-2f7b3450c8f6" Dec 02 20:27:49 crc kubenswrapper[4796]: I1202 20:27:49.218372 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/test-operator-controller-manager-5854674fcc-j9v7j"] Dec 02 20:27:49 crc kubenswrapper[4796]: I1202 20:27:49.230143 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-lgj8f"] Dec 02 20:27:49 crc kubenswrapper[4796]: W1202 20:27:49.235102 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b0e9209_3a80_4f10_9b56_4d3d28d0dee2.slice/crio-6886979a7daa9cc021ece5019c18d3687fc627c726238916c0a1829711f775f9 WatchSource:0}: Error finding container 6886979a7daa9cc021ece5019c18d3687fc627c726238916c0a1829711f775f9: Status 404 returned error can't find the container with id 6886979a7daa9cc021ece5019c18d3687fc627c726238916c0a1829711f775f9 Dec 02 20:27:49 crc kubenswrapper[4796]: I1202 20:27:49.238994 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-27kww"] Dec 02 20:27:49 crc kubenswrapper[4796]: E1202 20:27:49.241887 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-brcbx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-lgj8f_openstack-operators(0b0e9209-3a80-4f10-9b56-4d3d28d0dee2): ErrImagePull: pull QPS exceeded" 
logger="UnhandledError" Dec 02 20:27:49 crc kubenswrapper[4796]: E1202 20:27:49.245201 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-brcbx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-lgj8f_openstack-operators(0b0e9209-3a80-4f10-9b56-4d3d28d0dee2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 20:27:49 crc kubenswrapper[4796]: E1202 20:27:49.246606 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-lgj8f" podUID="0b0e9209-3a80-4f10-9b56-4d3d28d0dee2" Dec 02 20:27:49 crc kubenswrapper[4796]: I1202 20:27:49.252894 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-6wk6f"] Dec 02 20:27:49 crc kubenswrapper[4796]: E1202 20:27:49.253995 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sgwx4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-27kww_openstack-operators(0f460523-61e4-4cb6-9642-908cfa76579d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 20:27:49 crc kubenswrapper[4796]: E1202 20:27:49.255162 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-27kww" podUID="0f460523-61e4-4cb6-9642-908cfa76579d" Dec 02 20:27:49 crc kubenswrapper[4796]: W1202 20:27:49.291280 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod718dd3d4_158b_4dcd_912e_51e3cfa993e7.slice/crio-64d49430c6ae3fdbbb22722440f515420f437f0b40cc1713f11316c0cf4874e0 WatchSource:0}: Error finding container 64d49430c6ae3fdbbb22722440f515420f437f0b40cc1713f11316c0cf4874e0: Status 404 returned error can't find the container with id 64d49430c6ae3fdbbb22722440f515420f437f0b40cc1713f11316c0cf4874e0 Dec 02 20:27:49 crc kubenswrapper[4796]: E1202 20:27:49.296438 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ck9mw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-6wk6f_openstack-operators(07c7a00f-b0ab-4f7a-bbfe-2b137d1541b7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 20:27:49 crc kubenswrapper[4796]: W1202 20:27:49.300669 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f4b078d_0d59_4f07_97d7_0537e71a7770.slice/crio-97c8a6f52ef1813c9d86a5d227b49bdc4c8a3ae360e96df2db7cc912a4462b6e WatchSource:0}: Error finding container 97c8a6f52ef1813c9d86a5d227b49bdc4c8a3ae360e96df2db7cc912a4462b6e: Status 404 returned error can't find the container with id 97c8a6f52ef1813c9d86a5d227b49bdc4c8a3ae360e96df2db7cc912a4462b6e Dec 02 20:27:49 crc kubenswrapper[4796]: I1202 20:27:49.303351 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-76f4f8cb8b-nfzhk"] Dec 02 20:27:49 crc kubenswrapper[4796]: I1202 20:27:49.303411 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-tmdq9"] Dec 02 20:27:49 crc kubenswrapper[4796]: E1202 20:27:49.305340 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.113:5001/openstack-k8s-operators/watcher-operator:0e562967e0a192baf562500f49cef0abd8c6f6ec,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xw4g2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-76f4f8cb8b-nfzhk_openstack-operators(718dd3d4-158b-4dcd-912e-51e3cfa993e7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 20:27:49 crc kubenswrapper[4796]: E1202 20:27:49.305568 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ck9mw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-6wk6f_openstack-operators(07c7a00f-b0ab-4f7a-bbfe-2b137d1541b7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 20:27:49 crc kubenswrapper[4796]: E1202 20:27:49.306824 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-6wk6f" podUID="07c7a00f-b0ab-4f7a-bbfe-2b137d1541b7" Dec 02 20:27:49 crc kubenswrapper[4796]: E1202 20:27:49.308023 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xw4g2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-76f4f8cb8b-nfzhk_openstack-operators(718dd3d4-158b-4dcd-912e-51e3cfa993e7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 20:27:49 crc kubenswrapper[4796]: E1202 20:27:49.310484 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-76f4f8cb8b-nfzhk" podUID="718dd3d4-158b-4dcd-912e-51e3cfa993e7" Dec 02 20:27:49 crc kubenswrapper[4796]: E1202 20:27:49.312873 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tmkzb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-tmdq9_openstack-operators(8f4b078d-0d59-4f07-97d7-0537e71a7770): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 20:27:49 crc kubenswrapper[4796]: E1202 20:27:49.314863 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tmkzb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-tmdq9_openstack-operators(8f4b078d-0d59-4f07-97d7-0537e71a7770): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 20:27:49 crc kubenswrapper[4796]: E1202 20:27:49.316160 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tmdq9" podUID="8f4b078d-0d59-4f07-97d7-0537e71a7770" Dec 02 20:27:49 crc kubenswrapper[4796]: I1202 20:27:49.359635 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cefeb832-8af2-4be7-a143-a5ee5e28a091-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4nbjm9\" (UID: \"cefeb832-8af2-4be7-a143-a5ee5e28a091\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4nbjm9" Dec 02 20:27:49 crc kubenswrapper[4796]: E1202 20:27:49.359858 4796 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 20:27:49 crc kubenswrapper[4796]: E1202 20:27:49.359911 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cefeb832-8af2-4be7-a143-a5ee5e28a091-cert podName:cefeb832-8af2-4be7-a143-a5ee5e28a091 nodeName:}" failed. No retries permitted until 2025-12-02 20:27:51.359893949 +0000 UTC m=+954.363269483 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cefeb832-8af2-4be7-a143-a5ee5e28a091-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4nbjm9" (UID: "cefeb832-8af2-4be7-a143-a5ee5e28a091") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 20:27:49 crc kubenswrapper[4796]: I1202 20:27:49.460542 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3117ee94-1d46-46b4-b567-508d22bc6bac-webhook-certs\") pod \"openstack-operator-controller-manager-754dcb5d59-v9gsq\" (UID: \"3117ee94-1d46-46b4-b567-508d22bc6bac\") " pod="openstack-operators/openstack-operator-controller-manager-754dcb5d59-v9gsq" Dec 02 20:27:49 crc kubenswrapper[4796]: I1202 20:27:49.460820 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3117ee94-1d46-46b4-b567-508d22bc6bac-metrics-certs\") pod \"openstack-operator-controller-manager-754dcb5d59-v9gsq\" (UID: \"3117ee94-1d46-46b4-b567-508d22bc6bac\") " pod="openstack-operators/openstack-operator-controller-manager-754dcb5d59-v9gsq" Dec 02 20:27:49 crc kubenswrapper[4796]: E1202 20:27:49.460746 4796 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 02 20:27:49 crc kubenswrapper[4796]: E1202 20:27:49.461120 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3117ee94-1d46-46b4-b567-508d22bc6bac-webhook-certs podName:3117ee94-1d46-46b4-b567-508d22bc6bac nodeName:}" failed. No retries permitted until 2025-12-02 20:27:51.461098377 +0000 UTC m=+954.464473911 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3117ee94-1d46-46b4-b567-508d22bc6bac-webhook-certs") pod "openstack-operator-controller-manager-754dcb5d59-v9gsq" (UID: "3117ee94-1d46-46b4-b567-508d22bc6bac") : secret "webhook-server-cert" not found Dec 02 20:27:49 crc kubenswrapper[4796]: E1202 20:27:49.461047 4796 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 02 20:27:49 crc kubenswrapper[4796]: E1202 20:27:49.461524 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3117ee94-1d46-46b4-b567-508d22bc6bac-metrics-certs podName:3117ee94-1d46-46b4-b567-508d22bc6bac nodeName:}" failed. No retries permitted until 2025-12-02 20:27:51.461513676 +0000 UTC m=+954.464889210 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3117ee94-1d46-46b4-b567-508d22bc6bac-metrics-certs") pod "openstack-operator-controller-manager-754dcb5d59-v9gsq" (UID: "3117ee94-1d46-46b4-b567-508d22bc6bac") : secret "metrics-server-cert" not found Dec 02 20:27:49 crc kubenswrapper[4796]: I1202 20:27:49.808280 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-t8wwl" event={"ID":"936371ab-213c-4f66-92ce-c9f5a79e3aa6","Type":"ContainerStarted","Data":"5d85ce8000717a405d42f02cdfc1b97979aa510562f01cc50368ec991898b97d"} Dec 02 20:27:49 crc kubenswrapper[4796]: I1202 20:27:49.809999 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-lgj8f" event={"ID":"0b0e9209-3a80-4f10-9b56-4d3d28d0dee2","Type":"ContainerStarted","Data":"6886979a7daa9cc021ece5019c18d3687fc627c726238916c0a1829711f775f9"} Dec 02 20:27:49 crc kubenswrapper[4796]: I1202 20:27:49.816528 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-8tgcn" event={"ID":"a5e8b895-e788-44f4-8481-520f1cbd75c0","Type":"ContainerStarted","Data":"e6de5e4c005f498d73a5d010b5717e2c5f31ae610a4570e9257254f86da911a0"} Dec 02 20:27:49 crc kubenswrapper[4796]: E1202 20:27:49.817783 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-lgj8f" podUID="0b0e9209-3a80-4f10-9b56-4d3d28d0dee2" Dec 02 20:27:49 crc kubenswrapper[4796]: I1202 20:27:49.818224 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-27kww" event={"ID":"0f460523-61e4-4cb6-9642-908cfa76579d","Type":"ContainerStarted","Data":"7dd2064d105dd5c5874f91212577624215942af45f5d5f6234b957985211fafb"} Dec 02 20:27:49 crc kubenswrapper[4796]: E1202 20:27:49.819806 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-27kww" podUID="0f460523-61e4-4cb6-9642-908cfa76579d" Dec 02 20:27:49 crc kubenswrapper[4796]: I1202 20:27:49.820632 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-6wk6f" event={"ID":"07c7a00f-b0ab-4f7a-bbfe-2b137d1541b7","Type":"ContainerStarted","Data":"06d995ed440ab6e34dd3a4a550fae02f1ea4d4ce5854d4da3fc615f47f79253a"} Dec 02 20:27:49 crc kubenswrapper[4796]: E1202 20:27:49.824118 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f\\\"\", failed to \"StartContainer\" for 
\"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-6wk6f" podUID="07c7a00f-b0ab-4f7a-bbfe-2b137d1541b7" Dec 02 20:27:49 crc kubenswrapper[4796]: I1202 20:27:49.824712 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-5zl48" event={"ID":"4f7d1bce-8d2f-4f79-9b65-067b14abc6ca","Type":"ContainerStarted","Data":"d92e5b7558c335c61bc9476c1c7e3e7b8aaf9355278d0c8ebd81f73aad04e1cc"} Dec 02 20:27:49 crc kubenswrapper[4796]: I1202 20:27:49.835090 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tmdq9" event={"ID":"8f4b078d-0d59-4f07-97d7-0537e71a7770","Type":"ContainerStarted","Data":"97c8a6f52ef1813c9d86a5d227b49bdc4c8a3ae360e96df2db7cc912a4462b6e"} Dec 02 20:27:49 crc kubenswrapper[4796]: E1202 20:27:49.839842 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tmdq9" podUID="8f4b078d-0d59-4f07-97d7-0537e71a7770" Dec 02 20:27:49 crc kubenswrapper[4796]: I1202 20:27:49.841896 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-cnfjq" event={"ID":"84c8fa19-9b42-4ac0-bcf2-2f7b3450c8f6","Type":"ContainerStarted","Data":"935e78ee4b787fe04def15d5be0024a0766e709e2cb2d69c0f8742f79f8f1536"} Dec 02 20:27:49 crc kubenswrapper[4796]: E1202 20:27:49.843507 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:2e59cfbeefc3aff0bb0a6ae9ce2235129f5173c98dd5ee8dac229ad4895faea9\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-cnfjq" podUID="84c8fa19-9b42-4ac0-bcf2-2f7b3450c8f6" Dec 02 20:27:49 crc kubenswrapper[4796]: I1202 20:27:49.849031 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-j9v7j" event={"ID":"7b338d05-8a86-49bf-b996-71e686d384b2","Type":"ContainerStarted","Data":"87a02f9d1d1e8248719497298c3e2268fc6e4a95c1580ecd0c7f61861edafd94"} Dec 02 20:27:49 crc kubenswrapper[4796]: I1202 20:27:49.856106 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-qccgk" event={"ID":"1180fd08-546d-431c-9583-10fef2f94b1f","Type":"ContainerStarted","Data":"d04c06eb2c4a5c3fa6997c9b8f1935acc8b9c09d1a5ec5fd37ed8d69ad8f4bfd"} Dec 02 20:27:49 crc kubenswrapper[4796]: I1202 20:27:49.859204 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-s988m" 
event={"ID":"87e62b79-fb94-4209-88b2-c2b6b0966181","Type":"ContainerStarted","Data":"c07f29fea1e506aae371b627ade066f63b64f10685564a18596e499a6882aa30"} Dec 02 20:27:49 crc kubenswrapper[4796]: I1202 20:27:49.860788 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-4qgct" event={"ID":"8fa1598d-65f1-4841-8609-2c07e7dc8ffd","Type":"ContainerStarted","Data":"9dcd3d5d60803c6fa0c4dee52c30025a2a606b2e53c6188982c19db4053c01ee"} Dec 02 20:27:49 crc kubenswrapper[4796]: I1202 20:27:49.864968 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-bv8px" event={"ID":"741037fd-f9c3-4461-8f0c-d94f1f869ec0","Type":"ContainerStarted","Data":"6448788c495d3812a1f7af25fea42dd29701c637f1b40dc1cc3cfa87bf1d26a5"} Dec 02 20:27:49 crc kubenswrapper[4796]: I1202 20:27:49.874501 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-76f4f8cb8b-nfzhk" event={"ID":"718dd3d4-158b-4dcd-912e-51e3cfa993e7","Type":"ContainerStarted","Data":"64d49430c6ae3fdbbb22722440f515420f437f0b40cc1713f11316c0cf4874e0"} Dec 02 20:27:49 crc kubenswrapper[4796]: E1202 20:27:49.876818 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.113:5001/openstack-k8s-operators/watcher-operator:0e562967e0a192baf562500f49cef0abd8c6f6ec\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-76f4f8cb8b-nfzhk" podUID="718dd3d4-158b-4dcd-912e-51e3cfa993e7" Dec 02 20:27:50 crc kubenswrapper[4796]: I1202 20:27:50.018296 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hzcbh" Dec 02 20:27:50 crc kubenswrapper[4796]: I1202 20:27:50.068192 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hzcbh"] Dec 02 20:27:50 crc kubenswrapper[4796]: I1202 20:27:50.779143 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/98ce5936-d2d0-480f-bbd6-79e074aa862c-cert\") pod \"infra-operator-controller-manager-57548d458d-n5m58\" (UID: \"98ce5936-d2d0-480f-bbd6-79e074aa862c\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-n5m58" Dec 02 20:27:50 crc kubenswrapper[4796]: E1202 20:27:50.779383 4796 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 02 20:27:50 crc kubenswrapper[4796]: E1202 20:27:50.779486 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98ce5936-d2d0-480f-bbd6-79e074aa862c-cert podName:98ce5936-d2d0-480f-bbd6-79e074aa862c nodeName:}" failed. No retries permitted until 2025-12-02 20:27:54.779463578 +0000 UTC m=+957.782839112 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/98ce5936-d2d0-480f-bbd6-79e074aa862c-cert") pod "infra-operator-controller-manager-57548d458d-n5m58" (UID: "98ce5936-d2d0-480f-bbd6-79e074aa862c") : secret "infra-operator-webhook-server-cert" not found Dec 02 20:27:50 crc kubenswrapper[4796]: E1202 20:27:50.893132 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-27kww" podUID="0f460523-61e4-4cb6-9642-908cfa76579d" Dec 02 20:27:50 crc kubenswrapper[4796]: E1202 20:27:50.894813 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:2e59cfbeefc3aff0bb0a6ae9ce2235129f5173c98dd5ee8dac229ad4895faea9\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-cnfjq" podUID="84c8fa19-9b42-4ac0-bcf2-2f7b3450c8f6" Dec 02 20:27:50 crc kubenswrapper[4796]: E1202 20:27:50.894841 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-6wk6f" podUID="07c7a00f-b0ab-4f7a-bbfe-2b137d1541b7" Dec 02 20:27:50 crc kubenswrapper[4796]: E1202 20:27:50.895462 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.113:5001/openstack-k8s-operators/watcher-operator:0e562967e0a192baf562500f49cef0abd8c6f6ec\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-76f4f8cb8b-nfzhk" podUID="718dd3d4-158b-4dcd-912e-51e3cfa993e7" Dec 02 20:27:50 crc kubenswrapper[4796]: E1202 20:27:50.896003 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-lgj8f" podUID="0b0e9209-3a80-4f10-9b56-4d3d28d0dee2" Dec 02 20:27:50 crc kubenswrapper[4796]: E1202 20:27:50.896330 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tmdq9" podUID="8f4b078d-0d59-4f07-97d7-0537e71a7770" Dec 02 20:27:51 crc kubenswrapper[4796]: I1202 20:27:51.401574 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cefeb832-8af2-4be7-a143-a5ee5e28a091-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4nbjm9\" (UID: \"cefeb832-8af2-4be7-a143-a5ee5e28a091\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4nbjm9" Dec 02 20:27:51 crc kubenswrapper[4796]: E1202 20:27:51.401738 4796 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 20:27:51 crc kubenswrapper[4796]: E1202 20:27:51.401792 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cefeb832-8af2-4be7-a143-a5ee5e28a091-cert podName:cefeb832-8af2-4be7-a143-a5ee5e28a091 nodeName:}" failed. No retries permitted until 2025-12-02 20:27:55.401776553 +0000 UTC m=+958.405152087 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cefeb832-8af2-4be7-a143-a5ee5e28a091-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4nbjm9" (UID: "cefeb832-8af2-4be7-a143-a5ee5e28a091") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 20:27:51 crc kubenswrapper[4796]: I1202 20:27:51.502888 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3117ee94-1d46-46b4-b567-508d22bc6bac-webhook-certs\") pod \"openstack-operator-controller-manager-754dcb5d59-v9gsq\" (UID: \"3117ee94-1d46-46b4-b567-508d22bc6bac\") " pod="openstack-operators/openstack-operator-controller-manager-754dcb5d59-v9gsq" Dec 02 20:27:51 crc kubenswrapper[4796]: I1202 20:27:51.502947 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3117ee94-1d46-46b4-b567-508d22bc6bac-metrics-certs\") pod \"openstack-operator-controller-manager-754dcb5d59-v9gsq\" (UID: \"3117ee94-1d46-46b4-b567-508d22bc6bac\") " pod="openstack-operators/openstack-operator-controller-manager-754dcb5d59-v9gsq" Dec 02 20:27:51 crc kubenswrapper[4796]: E1202 20:27:51.503065 4796 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 02 20:27:51 crc kubenswrapper[4796]: E1202 20:27:51.503117 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3117ee94-1d46-46b4-b567-508d22bc6bac-metrics-certs podName:3117ee94-1d46-46b4-b567-508d22bc6bac nodeName:}" failed. No retries permitted until 2025-12-02 20:27:55.503100742 +0000 UTC m=+958.506476276 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3117ee94-1d46-46b4-b567-508d22bc6bac-metrics-certs") pod "openstack-operator-controller-manager-754dcb5d59-v9gsq" (UID: "3117ee94-1d46-46b4-b567-508d22bc6bac") : secret "metrics-server-cert" not found Dec 02 20:27:51 crc kubenswrapper[4796]: E1202 20:27:51.503162 4796 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 02 20:27:51 crc kubenswrapper[4796]: E1202 20:27:51.503182 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3117ee94-1d46-46b4-b567-508d22bc6bac-webhook-certs podName:3117ee94-1d46-46b4-b567-508d22bc6bac nodeName:}" failed. No retries permitted until 2025-12-02 20:27:55.503174234 +0000 UTC m=+958.506549768 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3117ee94-1d46-46b4-b567-508d22bc6bac-webhook-certs") pod "openstack-operator-controller-manager-754dcb5d59-v9gsq" (UID: "3117ee94-1d46-46b4-b567-508d22bc6bac") : secret "webhook-server-cert" not found Dec 02 20:27:51 crc kubenswrapper[4796]: I1202 20:27:51.899880 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hzcbh" podUID="a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef" containerName="registry-server" containerID="cri-o://cda13d3fdfd6a3ebf876dc3c98331dfa3e9ce84f68e9ef6762bb956f0b2ad978" gracePeriod=2 Dec 02 20:27:52 crc kubenswrapper[4796]: I1202 20:27:52.922819 4796 generic.go:334] "Generic (PLEG): container finished" podID="a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef" containerID="cda13d3fdfd6a3ebf876dc3c98331dfa3e9ce84f68e9ef6762bb956f0b2ad978" exitCode=0 Dec 02 20:27:52 crc kubenswrapper[4796]: I1202 20:27:52.923213 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzcbh" event={"ID":"a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef","Type":"ContainerDied","Data":"cda13d3fdfd6a3ebf876dc3c98331dfa3e9ce84f68e9ef6762bb956f0b2ad978"} Dec 02 20:27:54 crc kubenswrapper[4796]: I1202 20:27:54.850180 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/98ce5936-d2d0-480f-bbd6-79e074aa862c-cert\") pod \"infra-operator-controller-manager-57548d458d-n5m58\" (UID: \"98ce5936-d2d0-480f-bbd6-79e074aa862c\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-n5m58" Dec 02 20:27:54 crc kubenswrapper[4796]: E1202 20:27:54.850375 4796 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 02 20:27:54 crc kubenswrapper[4796]: E1202 20:27:54.850677 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98ce5936-d2d0-480f-bbd6-79e074aa862c-cert podName:98ce5936-d2d0-480f-bbd6-79e074aa862c nodeName:}" failed. No retries permitted until 2025-12-02 20:28:02.85065945 +0000 UTC m=+965.854034984 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/98ce5936-d2d0-480f-bbd6-79e074aa862c-cert") pod "infra-operator-controller-manager-57548d458d-n5m58" (UID: "98ce5936-d2d0-480f-bbd6-79e074aa862c") : secret "infra-operator-webhook-server-cert" not found Dec 02 20:27:55 crc kubenswrapper[4796]: I1202 20:27:55.189175 4796 patch_prober.go:28] interesting pod/machine-config-daemon-wzhpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 20:27:55 crc kubenswrapper[4796]: I1202 20:27:55.189291 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 20:27:55 crc kubenswrapper[4796]: I1202 20:27:55.189357 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" Dec 02 20:27:55 crc kubenswrapper[4796]: I1202 20:27:55.190305 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8d6f0c135e487e19c4f958756870ce83ded4504e5b54dacbb97a36ee8b0a0032"} pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 20:27:55 crc kubenswrapper[4796]: I1202 20:27:55.190414 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerName="machine-config-daemon" containerID="cri-o://8d6f0c135e487e19c4f958756870ce83ded4504e5b54dacbb97a36ee8b0a0032" gracePeriod=600 Dec 02 20:27:55 crc kubenswrapper[4796]: I1202 20:27:55.457533 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cefeb832-8af2-4be7-a143-a5ee5e28a091-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4nbjm9\" (UID: \"cefeb832-8af2-4be7-a143-a5ee5e28a091\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4nbjm9" Dec 02 20:27:55 crc kubenswrapper[4796]: E1202 20:27:55.457763 4796 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 20:27:55 crc kubenswrapper[4796]: E1202 20:27:55.458250 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cefeb832-8af2-4be7-a143-a5ee5e28a091-cert podName:cefeb832-8af2-4be7-a143-a5ee5e28a091 nodeName:}" failed. No retries permitted until 2025-12-02 20:28:03.458219748 +0000 UTC m=+966.461595312 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cefeb832-8af2-4be7-a143-a5ee5e28a091-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4nbjm9" (UID: "cefeb832-8af2-4be7-a143-a5ee5e28a091") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 20:27:55 crc kubenswrapper[4796]: I1202 20:27:55.558892 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3117ee94-1d46-46b4-b567-508d22bc6bac-webhook-certs\") pod \"openstack-operator-controller-manager-754dcb5d59-v9gsq\" (UID: \"3117ee94-1d46-46b4-b567-508d22bc6bac\") " pod="openstack-operators/openstack-operator-controller-manager-754dcb5d59-v9gsq" Dec 02 20:27:55 crc kubenswrapper[4796]: I1202 20:27:55.558967 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3117ee94-1d46-46b4-b567-508d22bc6bac-metrics-certs\") pod \"openstack-operator-controller-manager-754dcb5d59-v9gsq\" (UID: \"3117ee94-1d46-46b4-b567-508d22bc6bac\") " pod="openstack-operators/openstack-operator-controller-manager-754dcb5d59-v9gsq" Dec 02 20:27:55 crc kubenswrapper[4796]: E1202 20:27:55.559119 4796 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 02 20:27:55 crc kubenswrapper[4796]: E1202 20:27:55.559209 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3117ee94-1d46-46b4-b567-508d22bc6bac-metrics-certs podName:3117ee94-1d46-46b4-b567-508d22bc6bac nodeName:}" failed. No retries permitted until 2025-12-02 20:28:03.559186719 +0000 UTC m=+966.562562263 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3117ee94-1d46-46b4-b567-508d22bc6bac-metrics-certs") pod "openstack-operator-controller-manager-754dcb5d59-v9gsq" (UID: "3117ee94-1d46-46b4-b567-508d22bc6bac") : secret "metrics-server-cert" not found Dec 02 20:27:55 crc kubenswrapper[4796]: E1202 20:27:55.560176 4796 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 02 20:27:55 crc kubenswrapper[4796]: E1202 20:27:55.560240 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3117ee94-1d46-46b4-b567-508d22bc6bac-webhook-certs podName:3117ee94-1d46-46b4-b567-508d22bc6bac nodeName:}" failed. No retries permitted until 2025-12-02 20:28:03.560227634 +0000 UTC m=+966.563603178 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3117ee94-1d46-46b4-b567-508d22bc6bac-webhook-certs") pod "openstack-operator-controller-manager-754dcb5d59-v9gsq" (UID: "3117ee94-1d46-46b4-b567-508d22bc6bac") : secret "webhook-server-cert" not found Dec 02 20:27:55 crc kubenswrapper[4796]: I1202 20:27:55.971341 4796 generic.go:334] "Generic (PLEG): container finished" podID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerID="8d6f0c135e487e19c4f958756870ce83ded4504e5b54dacbb97a36ee8b0a0032" exitCode=0 Dec 02 20:27:55 crc kubenswrapper[4796]: I1202 20:27:55.971414 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" event={"ID":"5558dc7c-93f9-4212-bf22-fdec743e47ee","Type":"ContainerDied","Data":"8d6f0c135e487e19c4f958756870ce83ded4504e5b54dacbb97a36ee8b0a0032"} Dec 02 20:27:55 crc kubenswrapper[4796]: I1202 20:27:55.971507 4796 scope.go:117] "RemoveContainer" containerID="a547bba042a4ae0a5c7b160e108564e0b4924894b3f1b07d2ef5933a2669d856" Dec 02 20:27:58 crc kubenswrapper[4796]: E1202 20:27:58.794639 4796 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cda13d3fdfd6a3ebf876dc3c98331dfa3e9ce84f68e9ef6762bb956f0b2ad978 is running failed: container process not found" containerID="cda13d3fdfd6a3ebf876dc3c98331dfa3e9ce84f68e9ef6762bb956f0b2ad978" cmd=["grpc_health_probe","-addr=:50051"] Dec 02 20:27:58 crc kubenswrapper[4796]: E1202 20:27:58.795502 4796 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cda13d3fdfd6a3ebf876dc3c98331dfa3e9ce84f68e9ef6762bb956f0b2ad978 is running failed: container process not found" containerID="cda13d3fdfd6a3ebf876dc3c98331dfa3e9ce84f68e9ef6762bb956f0b2ad978" cmd=["grpc_health_probe","-addr=:50051"] Dec 02 20:27:58 crc kubenswrapper[4796]: E1202 20:27:58.795878 4796 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cda13d3fdfd6a3ebf876dc3c98331dfa3e9ce84f68e9ef6762bb956f0b2ad978 is running failed: container process not found" containerID="cda13d3fdfd6a3ebf876dc3c98331dfa3e9ce84f68e9ef6762bb956f0b2ad978" cmd=["grpc_health_probe","-addr=:50051"] Dec 02 20:27:58 crc kubenswrapper[4796]: E1202 20:27:58.795945 4796 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cda13d3fdfd6a3ebf876dc3c98331dfa3e9ce84f68e9ef6762bb956f0b2ad978 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-hzcbh" podUID="a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef" containerName="registry-server" Dec 02 20:28:00 crc kubenswrapper[4796]: E1202 20:28:00.337664 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429" Dec 02 20:28:00 crc kubenswrapper[4796]: E1202 20:28:00.338160 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nbxt4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-5f64f6f8bb-shlwt_openstack-operators(8cff73ad-f4f3-47a7-8ba1-985614f757a3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 20:28:02 crc kubenswrapper[4796]: I1202 20:28:02.888941 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/98ce5936-d2d0-480f-bbd6-79e074aa862c-cert\") pod \"infra-operator-controller-manager-57548d458d-n5m58\" (UID: \"98ce5936-d2d0-480f-bbd6-79e074aa862c\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-n5m58" Dec 02 20:28:02 crc kubenswrapper[4796]: I1202 20:28:02.901885 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/98ce5936-d2d0-480f-bbd6-79e074aa862c-cert\") pod \"infra-operator-controller-manager-57548d458d-n5m58\" (UID: \"98ce5936-d2d0-480f-bbd6-79e074aa862c\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-n5m58" Dec 02 20:28:03 crc kubenswrapper[4796]: I1202 20:28:03.167295 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-n88zw" Dec 02 20:28:03 crc kubenswrapper[4796]: I1202 20:28:03.174972 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-n5m58" Dec 02 20:28:03 crc kubenswrapper[4796]: I1202 20:28:03.501148 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cefeb832-8af2-4be7-a143-a5ee5e28a091-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4nbjm9\" (UID: \"cefeb832-8af2-4be7-a143-a5ee5e28a091\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4nbjm9" Dec 02 20:28:03 crc kubenswrapper[4796]: I1202 20:28:03.509202 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cefeb832-8af2-4be7-a143-a5ee5e28a091-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4nbjm9\" (UID: \"cefeb832-8af2-4be7-a143-a5ee5e28a091\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4nbjm9" Dec 02 20:28:03 crc kubenswrapper[4796]: I1202 20:28:03.602746 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3117ee94-1d46-46b4-b567-508d22bc6bac-webhook-certs\") pod \"openstack-operator-controller-manager-754dcb5d59-v9gsq\" (UID: \"3117ee94-1d46-46b4-b567-508d22bc6bac\") " pod="openstack-operators/openstack-operator-controller-manager-754dcb5d59-v9gsq" Dec 02 20:28:03 crc kubenswrapper[4796]: I1202 20:28:03.602852 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3117ee94-1d46-46b4-b567-508d22bc6bac-metrics-certs\") pod \"openstack-operator-controller-manager-754dcb5d59-v9gsq\" (UID: \"3117ee94-1d46-46b4-b567-508d22bc6bac\") " pod="openstack-operators/openstack-operator-controller-manager-754dcb5d59-v9gsq" Dec 02 20:28:03 crc kubenswrapper[4796]: E1202 20:28:03.602961 4796 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 02 20:28:03 crc kubenswrapper[4796]: E1202 20:28:03.603223 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3117ee94-1d46-46b4-b567-508d22bc6bac-webhook-certs podName:3117ee94-1d46-46b4-b567-508d22bc6bac nodeName:}" failed. No retries permitted until 2025-12-02 20:28:19.603201032 +0000 UTC m=+982.606576576 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3117ee94-1d46-46b4-b567-508d22bc6bac-webhook-certs") pod "openstack-operator-controller-manager-754dcb5d59-v9gsq" (UID: "3117ee94-1d46-46b4-b567-508d22bc6bac") : secret "webhook-server-cert" not found Dec 02 20:28:03 crc kubenswrapper[4796]: E1202 20:28:03.603355 4796 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 02 20:28:03 crc kubenswrapper[4796]: E1202 20:28:03.603510 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3117ee94-1d46-46b4-b567-508d22bc6bac-metrics-certs podName:3117ee94-1d46-46b4-b567-508d22bc6bac nodeName:}" failed. No retries permitted until 2025-12-02 20:28:19.603465368 +0000 UTC m=+982.606840892 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3117ee94-1d46-46b4-b567-508d22bc6bac-metrics-certs") pod "openstack-operator-controller-manager-754dcb5d59-v9gsq" (UID: "3117ee94-1d46-46b4-b567-508d22bc6bac") : secret "metrics-server-cert" not found Dec 02 20:28:03 crc kubenswrapper[4796]: I1202 20:28:03.777123 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-7gt8q" Dec 02 20:28:03 crc kubenswrapper[4796]: I1202 20:28:03.785311 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4nbjm9" Dec 02 20:28:04 crc kubenswrapper[4796]: E1202 20:28:04.455852 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59" Dec 02 20:28:04 crc kubenswrapper[4796]: E1202 20:28:04.456210 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6zzmn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-5zl48_openstack-operators(4f7d1bce-8d2f-4f79-9b65-067b14abc6ca): ErrImagePull: 
rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 20:28:05 crc kubenswrapper[4796]: E1202 20:28:05.497618 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:abdb733b01e92ac17f565762f30f1d075b44c16421bd06e557f6bb3c319e1809" Dec 02 20:28:05 crc kubenswrapper[4796]: E1202 20:28:05.497969 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:abdb733b01e92ac17f565762f30f1d075b44c16421bd06e557f6bb3c319e1809,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j58zr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-77987cd8cd-tsrx8_openstack-operators(9f64c0f7-3638-4f4c-bf3e-8ab0da4f2f77): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 20:28:06 crc kubenswrapper[4796]: E1202 20:28:06.301842 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:9f68d7bc8c6bce38f46dee8a8272d5365c49fe7b32b2af52e8ac884e212f3a85" Dec 02 20:28:06 crc kubenswrapper[4796]: E1202 20:28:06.302568 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:9f68d7bc8c6bce38f46dee8a8272d5365c49fe7b32b2af52e8ac884e212f3a85,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-knmdz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-78b4bc895b-grxnn_openstack-operators(ac795881-0ae6-43cf-9a1d-119408238bf6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 20:28:07 crc kubenswrapper[4796]: E1202 20:28:07.211203 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:0f523b7e2fa9e86fef986acf07d0c42d5658c475d565f11eaea926ebffcb6530" Dec 02 20:28:07 crc kubenswrapper[4796]: E1202 20:28:07.212695 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:0f523b7e2fa9e86fef986acf07d0c42d5658c475d565f11eaea926ebffcb6530,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: 
{{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kfz49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-6c548fd776-bv8px_openstack-operators(741037fd-f9c3-4461-8f0c-d94f1f869ec0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 20:28:08 crc kubenswrapper[4796]: E1202 20:28:08.377678 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7" Dec 02 20:28:08 crc kubenswrapper[4796]: E1202 20:28:08.378133 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xnc4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-qccgk_openstack-operators(1180fd08-546d-431c-9583-10fef2f94b1f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 20:28:08 crc kubenswrapper[4796]: I1202 20:28:08.552022 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hzcbh" Dec 02 20:28:08 crc kubenswrapper[4796]: I1202 20:28:08.609689 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dn725\" (UniqueName: \"kubernetes.io/projected/a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef-kube-api-access-dn725\") pod \"a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef\" (UID: \"a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef\") " Dec 02 20:28:08 crc kubenswrapper[4796]: I1202 20:28:08.609744 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef-utilities\") pod \"a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef\" (UID: \"a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef\") " Dec 02 20:28:08 crc kubenswrapper[4796]: I1202 20:28:08.609767 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef-catalog-content\") pod \"a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef\" (UID: \"a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef\") " Dec 02 20:28:08 crc kubenswrapper[4796]: I1202 20:28:08.616657 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef-utilities" (OuterVolumeSpecName: "utilities") pod "a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef" (UID: "a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:28:08 crc kubenswrapper[4796]: I1202 20:28:08.616987 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef-kube-api-access-dn725" (OuterVolumeSpecName: "kube-api-access-dn725") pod "a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef" (UID: "a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef"). InnerVolumeSpecName "kube-api-access-dn725". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:28:08 crc kubenswrapper[4796]: I1202 20:28:08.672919 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef" (UID: "a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:28:08 crc kubenswrapper[4796]: I1202 20:28:08.711089 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dn725\" (UniqueName: \"kubernetes.io/projected/a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef-kube-api-access-dn725\") on node \"crc\" DevicePath \"\"" Dec 02 20:28:08 crc kubenswrapper[4796]: I1202 20:28:08.711118 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 20:28:08 crc kubenswrapper[4796]: I1202 20:28:08.711131 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 20:28:09 crc kubenswrapper[4796]: I1202 20:28:09.086761 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzcbh" event={"ID":"a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef","Type":"ContainerDied","Data":"27c6bf558d22deed9966971e86146376a03e328429eae43e31d40bdf62b56e0d"} Dec 02 20:28:09 crc kubenswrapper[4796]: I1202 20:28:09.087058 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hzcbh" Dec 02 20:28:09 crc kubenswrapper[4796]: I1202 20:28:09.123501 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hzcbh"] Dec 02 20:28:09 crc kubenswrapper[4796]: I1202 20:28:09.130867 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hzcbh"] Dec 02 20:28:09 crc kubenswrapper[4796]: I1202 20:28:09.279776 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef" path="/var/lib/kubelet/pods/a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef/volumes" Dec 02 20:28:11 crc kubenswrapper[4796]: I1202 20:28:11.253153 4796 scope.go:117] "RemoveContainer" containerID="cda13d3fdfd6a3ebf876dc3c98331dfa3e9ce84f68e9ef6762bb956f0b2ad978" Dec 02 20:28:13 crc kubenswrapper[4796]: I1202 20:28:13.822398 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-n5m58"] Dec 02 20:28:13 crc kubenswrapper[4796]: I1202 20:28:13.885002 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4nbjm9"] Dec 02 20:28:14 crc kubenswrapper[4796]: W1202 20:28:14.487489 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98ce5936_d2d0_480f_bbd6_79e074aa862c.slice/crio-4a55da7bd6ecfe161171479587c243bbee1efaf29c8804bc68aa82e58f790a16 WatchSource:0}: Error finding container 4a55da7bd6ecfe161171479587c243bbee1efaf29c8804bc68aa82e58f790a16: Status 404 returned error can't find the container with id 4a55da7bd6ecfe161171479587c243bbee1efaf29c8804bc68aa82e58f790a16 Dec 
02 20:28:14 crc kubenswrapper[4796]: I1202 20:28:14.499549 4796 scope.go:117] "RemoveContainer" containerID="2cf9fbd2aa773dd6b1beb8724b0266415b35435f7d630a0eaf7688c0ba9eb74b" Dec 02 20:28:14 crc kubenswrapper[4796]: I1202 20:28:14.718311 4796 scope.go:117] "RemoveContainer" containerID="0f6fcef609370d010706753cdd643bee88a5345f684e4d9012f064f163ffc949" Dec 02 20:28:15 crc kubenswrapper[4796]: I1202 20:28:15.136794 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-t8wwl" event={"ID":"936371ab-213c-4f66-92ce-c9f5a79e3aa6","Type":"ContainerStarted","Data":"370f3df23a7cc31fbc0183ec03dadd112c3cbe8bdd4eb190004c462638ce487a"} Dec 02 20:28:15 crc kubenswrapper[4796]: I1202 20:28:15.144972 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4nbjm9" event={"ID":"cefeb832-8af2-4be7-a143-a5ee5e28a091","Type":"ContainerStarted","Data":"49d3c580fd936f237e0807482c5545892bcb5386d0592801029bc910bf34c9d1"} Dec 02 20:28:15 crc kubenswrapper[4796]: I1202 20:28:15.155340 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-j9v7j" event={"ID":"7b338d05-8a86-49bf-b996-71e686d384b2","Type":"ContainerStarted","Data":"87ff0806129d6e5b3792e02b23de21733ad59ba628aa6c1345aa16b5da456fc5"} Dec 02 20:28:15 crc kubenswrapper[4796]: I1202 20:28:15.158822 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" event={"ID":"5558dc7c-93f9-4212-bf22-fdec743e47ee","Type":"ContainerStarted","Data":"81e0968c57ec6d9b11845db69e201783fcaa5e46b23de5768e000d45505c4ab7"} Dec 02 20:28:15 crc kubenswrapper[4796]: I1202 20:28:15.164640 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-n5m58" event={"ID":"98ce5936-d2d0-480f-bbd6-79e074aa862c","Type":"ContainerStarted","Data":"4a55da7bd6ecfe161171479587c243bbee1efaf29c8804bc68aa82e58f790a16"} Dec 02 20:28:15 crc kubenswrapper[4796]: I1202 20:28:15.183214 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-s988m" event={"ID":"87e62b79-fb94-4209-88b2-c2b6b0966181","Type":"ContainerStarted","Data":"a8f7a513c6b03083e905935cfc5eeca3c37b2fa504831fc8256a374e78df8ff9"} Dec 02 20:28:15 crc kubenswrapper[4796]: I1202 20:28:15.193642 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-8tgcn" event={"ID":"a5e8b895-e788-44f4-8481-520f1cbd75c0","Type":"ContainerStarted","Data":"cb96c76601c34399b785e43b733036d1bb74cc0939a17e157eb6631ac303d6f5"} Dec 02 20:28:15 crc kubenswrapper[4796]: I1202 20:28:15.210083 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-4qgct" event={"ID":"8fa1598d-65f1-4841-8609-2c07e7dc8ffd","Type":"ContainerStarted","Data":"a279da80fb4371800763ee21beb2a50cab86834528d369ef14da7e16f006683c"} Dec 02 20:28:15 crc kubenswrapper[4796]: I1202 20:28:15.226661 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cshfw" event={"ID":"8161682e-0c53-41ab-bef7-99766302c3eb","Type":"ContainerStarted","Data":"e6a165881e5f28929207de5786fd440a24d59368cb90ab480c99a863ef071386"} Dec 02 20:28:15 crc kubenswrapper[4796]: I1202 20:28:15.233042 4796 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-8fc8h" event={"ID":"b751e4e4-8d34-4fcc-baca-1e0eea85f1b9","Type":"ContainerStarted","Data":"c5d0d9b1b810702281df3e63f2efb5cca6c3db115fb3a4c0c7ddb81993ceabc7"} Dec 02 20:28:17 crc kubenswrapper[4796]: I1202 20:28:17.276406 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-jwbj8" event={"ID":"c4bff453-90ae-481c-8027-5eca98e48917","Type":"ContainerStarted","Data":"be1c5d396beb746395b6000d7cad1827f023a7bbea79853f5f7e59aeecba60eb"} Dec 02 20:28:18 crc kubenswrapper[4796]: I1202 20:28:18.279048 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tmdq9" event={"ID":"8f4b078d-0d59-4f07-97d7-0537e71a7770","Type":"ContainerStarted","Data":"c3759e7ccb40c5b3faa9c63d8c574066d271df2ac112a6b3b16d663571a2adb3"} Dec 02 20:28:18 crc kubenswrapper[4796]: I1202 20:28:18.281502 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-lgj8f" event={"ID":"0b0e9209-3a80-4f10-9b56-4d3d28d0dee2","Type":"ContainerStarted","Data":"1e008d96963f18fdaa103977d4d3b956c96b171ff7e81fc8e42a126cc5539e24"} Dec 02 20:28:19 crc kubenswrapper[4796]: I1202 20:28:19.289729 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-6wk6f" event={"ID":"07c7a00f-b0ab-4f7a-bbfe-2b137d1541b7","Type":"ContainerStarted","Data":"56c94c3b4d697d7eb72b3bb028becce1de612a26b75e8b08d6b6de48ee45b7f7"} Dec 02 20:28:19 crc kubenswrapper[4796]: I1202 20:28:19.292066 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-cnfjq" event={"ID":"84c8fa19-9b42-4ac0-bcf2-2f7b3450c8f6","Type":"ContainerStarted","Data":"8a99ba37c9b3cbedc082ac07fa02fa17ccf02f9fcdbaf6649db7f17edd4f66ab"} Dec 02 20:28:19 crc kubenswrapper[4796]: I1202 20:28:19.701343 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3117ee94-1d46-46b4-b567-508d22bc6bac-webhook-certs\") pod \"openstack-operator-controller-manager-754dcb5d59-v9gsq\" (UID: \"3117ee94-1d46-46b4-b567-508d22bc6bac\") " pod="openstack-operators/openstack-operator-controller-manager-754dcb5d59-v9gsq" Dec 02 20:28:19 crc kubenswrapper[4796]: I1202 20:28:19.701692 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3117ee94-1d46-46b4-b567-508d22bc6bac-metrics-certs\") pod \"openstack-operator-controller-manager-754dcb5d59-v9gsq\" (UID: \"3117ee94-1d46-46b4-b567-508d22bc6bac\") " pod="openstack-operators/openstack-operator-controller-manager-754dcb5d59-v9gsq" Dec 02 20:28:19 crc kubenswrapper[4796]: I1202 20:28:19.707779 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3117ee94-1d46-46b4-b567-508d22bc6bac-metrics-certs\") pod \"openstack-operator-controller-manager-754dcb5d59-v9gsq\" (UID: \"3117ee94-1d46-46b4-b567-508d22bc6bac\") " pod="openstack-operators/openstack-operator-controller-manager-754dcb5d59-v9gsq" Dec 02 20:28:19 crc kubenswrapper[4796]: I1202 20:28:19.710012 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/3117ee94-1d46-46b4-b567-508d22bc6bac-webhook-certs\") pod \"openstack-operator-controller-manager-754dcb5d59-v9gsq\" (UID: \"3117ee94-1d46-46b4-b567-508d22bc6bac\") " pod="openstack-operators/openstack-operator-controller-manager-754dcb5d59-v9gsq" Dec 02 20:28:19 crc kubenswrapper[4796]: I1202 20:28:19.950188 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-jp5hs" Dec 02 20:28:19 crc kubenswrapper[4796]: I1202 20:28:19.958300 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-754dcb5d59-v9gsq" Dec 02 20:28:21 crc kubenswrapper[4796]: E1202 20:28:21.786384 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-tsrx8" podUID="9f64c0f7-3638-4f4c-bf3e-8ab0da4f2f77" Dec 02 20:28:21 crc kubenswrapper[4796]: E1202 20:28:21.905901 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-5zl48" podUID="4f7d1bce-8d2f-4f79-9b65-067b14abc6ca" Dec 02 20:28:22 crc kubenswrapper[4796]: E1202 20:28:22.022610 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-bv8px" podUID="741037fd-f9c3-4461-8f0c-d94f1f869ec0" Dec 02 20:28:22 crc kubenswrapper[4796]: I1202 20:28:22.034892 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-754dcb5d59-v9gsq"] Dec 02 20:28:22 crc kubenswrapper[4796]: E1202 20:28:22.207469 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-qccgk" podUID="1180fd08-546d-431c-9583-10fef2f94b1f" Dec 02 20:28:22 crc kubenswrapper[4796]: E1202 20:28:22.218157 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-shlwt" podUID="8cff73ad-f4f3-47a7-8ba1-985614f757a3" Dec 02 20:28:22 crc kubenswrapper[4796]: I1202 20:28:22.358968 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-qccgk" event={"ID":"1180fd08-546d-431c-9583-10fef2f94b1f","Type":"ContainerStarted","Data":"a8898d30b1e2f5c17f87a109e0d7db03018ad1ff8b6997ae9cc0c0f3a044f6a3"} Dec 02 20:28:22 crc kubenswrapper[4796]: I1202 20:28:22.385271 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-shlwt" event={"ID":"8cff73ad-f4f3-47a7-8ba1-985614f757a3","Type":"ContainerStarted","Data":"efc85afa07a0c6e7c9e997677ec15cfa75bf64efd684424a1ef86e3962680129"} Dec 02 20:28:22 crc kubenswrapper[4796]: 
I1202 20:28:22.406064 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-27kww" event={"ID":"0f460523-61e4-4cb6-9642-908cfa76579d","Type":"ContainerStarted","Data":"4b285eea95c44576e5b46adb103460e6fd99631cadc9f2beee7a000ed06d82fb"} Dec 02 20:28:22 crc kubenswrapper[4796]: I1202 20:28:22.414931 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-76f4f8cb8b-nfzhk" event={"ID":"718dd3d4-158b-4dcd-912e-51e3cfa993e7","Type":"ContainerStarted","Data":"ed7ceec4dc6ad9fa5b7cc84e6de794343b7d522dc3537de8792b5475f016ffd5"} Dec 02 20:28:22 crc kubenswrapper[4796]: I1202 20:28:22.415588 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-76f4f8cb8b-nfzhk" Dec 02 20:28:22 crc kubenswrapper[4796]: I1202 20:28:22.417144 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-754dcb5d59-v9gsq" event={"ID":"3117ee94-1d46-46b4-b567-508d22bc6bac","Type":"ContainerStarted","Data":"25806566c8c1f4ad00c59ed8e5de753e65c0b2cd01de24af34ac3130aa19b838"} Dec 02 20:28:22 crc kubenswrapper[4796]: I1202 20:28:22.438586 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-t8wwl" event={"ID":"936371ab-213c-4f66-92ce-c9f5a79e3aa6","Type":"ContainerStarted","Data":"6d03dabcbbd8648b4e3913ec32e628ec61ff5ee4d346637a3be3a99f38a37401"} Dec 02 20:28:22 crc kubenswrapper[4796]: I1202 20:28:22.438902 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-t8wwl" Dec 02 20:28:22 crc kubenswrapper[4796]: I1202 20:28:22.452975 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-76f4f8cb8b-nfzhk" podStartSLOduration=9.537578472 podStartE2EDuration="35.452954336s" podCreationTimestamp="2025-12-02 20:27:47 +0000 UTC" firstStartedPulling="2025-12-02 20:27:49.305115785 +0000 UTC m=+952.308491319" lastFinishedPulling="2025-12-02 20:28:15.220491649 +0000 UTC m=+978.223867183" observedRunningTime="2025-12-02 20:28:22.452561647 +0000 UTC m=+985.455937181" watchObservedRunningTime="2025-12-02 20:28:22.452954336 +0000 UTC m=+985.456329870" Dec 02 20:28:22 crc kubenswrapper[4796]: I1202 20:28:22.456892 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-t8wwl" Dec 02 20:28:22 crc kubenswrapper[4796]: I1202 20:28:22.463582 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-jwbj8" event={"ID":"c4bff453-90ae-481c-8027-5eca98e48917","Type":"ContainerStarted","Data":"c55b492c3451be5daa4e7cb2a1bd3fabb139504101a0a18a3a4db6388793db15"} Dec 02 20:28:22 crc kubenswrapper[4796]: I1202 20:28:22.464303 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-jwbj8" Dec 02 20:28:22 crc kubenswrapper[4796]: I1202 20:28:22.466966 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-jwbj8" Dec 02 20:28:22 crc kubenswrapper[4796]: I1202 20:28:22.476097 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4nbjm9" event={"ID":"cefeb832-8af2-4be7-a143-a5ee5e28a091","Type":"ContainerStarted","Data":"9445a0185f9952de410cb37bc643b40a0185ac573f312adebf45ec792a5e95e3"} Dec 02 20:28:22 crc kubenswrapper[4796]: I1202 20:28:22.477035 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-27kww" podStartSLOduration=7.841125426 podStartE2EDuration="35.47700115s" podCreationTimestamp="2025-12-02 20:27:47 +0000 UTC" firstStartedPulling="2025-12-02 20:27:49.253842276 +0000 UTC m=+952.257217800" lastFinishedPulling="2025-12-02 20:28:16.88971799 +0000 UTC m=+979.893093524" observedRunningTime="2025-12-02 20:28:22.475750489 +0000 UTC m=+985.479126023" watchObservedRunningTime="2025-12-02 20:28:22.47700115 +0000 UTC m=+985.480376674" Dec 02 20:28:22 crc kubenswrapper[4796]: I1202 20:28:22.489648 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-8fc8h" event={"ID":"b751e4e4-8d34-4fcc-baca-1e0eea85f1b9","Type":"ContainerStarted","Data":"e308d62a2da6d2b6bf65c9455fa4e5376f30cbc81929fac824fc44ebf3d1ccb0"} Dec 02 20:28:22 crc kubenswrapper[4796]: I1202 20:28:22.491502 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-8fc8h" Dec 02 20:28:22 crc kubenswrapper[4796]: I1202 20:28:22.492081 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-8fc8h" Dec 02 20:28:22 crc kubenswrapper[4796]: I1202 20:28:22.509758 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-jwbj8" podStartSLOduration=3.443701293 podStartE2EDuration="36.509738564s" podCreationTimestamp="2025-12-02 20:27:46 +0000 UTC" firstStartedPulling="2025-12-02 20:27:48.513861037 +0000 UTC m=+951.517236571" lastFinishedPulling="2025-12-02 20:28:21.579898288 +0000 UTC m=+984.583273842" observedRunningTime="2025-12-02 20:28:22.498162603 +0000 UTC m=+985.501538157" watchObservedRunningTime="2025-12-02 20:28:22.509738564 +0000 UTC m=+985.513114098" Dec 02 20:28:22 crc kubenswrapper[4796]: I1202 20:28:22.511986 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-n5m58" event={"ID":"98ce5936-d2d0-480f-bbd6-79e074aa862c","Type":"ContainerStarted","Data":"fa85f3c48168f5d80e753a1d21032833ee156c36d0694fceadf0a678af410515"} Dec 02 20:28:22 crc kubenswrapper[4796]: I1202 20:28:22.512687 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57548d458d-n5m58" Dec 02 20:28:22 crc kubenswrapper[4796]: I1202 20:28:22.517528 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-lgj8f" event={"ID":"0b0e9209-3a80-4f10-9b56-4d3d28d0dee2","Type":"ContainerStarted","Data":"d776e33efae2c1650ef96b7e48bcdaa724a39cb717be8d8cc038ba0519d8e89c"} Dec 02 20:28:22 crc kubenswrapper[4796]: I1202 20:28:22.519706 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-lgj8f" Dec 02 20:28:22 crc kubenswrapper[4796]: I1202 20:28:22.521994 4796 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-lgj8f" Dec 02 20:28:22 crc kubenswrapper[4796]: I1202 20:28:22.527779 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-5zl48" event={"ID":"4f7d1bce-8d2f-4f79-9b65-067b14abc6ca","Type":"ContainerStarted","Data":"e08aaa508534dc77a73288b8f362d532a2e14ec08758869216e46ed68276a221"} Dec 02 20:28:22 crc kubenswrapper[4796]: I1202 20:28:22.551202 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-bv8px" event={"ID":"741037fd-f9c3-4461-8f0c-d94f1f869ec0","Type":"ContainerStarted","Data":"770c82add904cccc0b6f99482be050ececcb2bd8e6539ff4812adae9fe12cde0"} Dec 02 20:28:22 crc kubenswrapper[4796]: I1202 20:28:22.558164 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-tsrx8" event={"ID":"9f64c0f7-3638-4f4c-bf3e-8ab0da4f2f77","Type":"ContainerStarted","Data":"d1b0a9283e341883ac53e1bb13ba17dfc7476bdad0fc75a2c5ea1e37823b8a2c"} Dec 02 20:28:22 crc kubenswrapper[4796]: I1202 20:28:22.567145 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-t8wwl" podStartSLOduration=3.003859863 podStartE2EDuration="35.567121886s" podCreationTimestamp="2025-12-02 20:27:47 +0000 UTC" firstStartedPulling="2025-12-02 20:27:49.088808216 +0000 UTC m=+952.092183750" lastFinishedPulling="2025-12-02 20:28:21.652070239 +0000 UTC m=+984.655445773" observedRunningTime="2025-12-02 20:28:22.529172955 +0000 UTC m=+985.532548489" watchObservedRunningTime="2025-12-02 20:28:22.567121886 +0000 UTC m=+985.570497420" Dec 02 20:28:22 crc kubenswrapper[4796]: I1202 20:28:22.624809 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-8fc8h" podStartSLOduration=3.637960663 podStartE2EDuration="36.624784405s" podCreationTimestamp="2025-12-02 20:27:46 +0000 UTC" firstStartedPulling="2025-12-02 20:27:48.650662304 +0000 UTC m=+951.654037838" lastFinishedPulling="2025-12-02 20:28:21.637486036 +0000 UTC m=+984.640861580" observedRunningTime="2025-12-02 20:28:22.599800049 +0000 UTC m=+985.603175583" watchObservedRunningTime="2025-12-02 20:28:22.624784405 +0000 UTC m=+985.628159939" Dec 02 20:28:22 crc kubenswrapper[4796]: I1202 20:28:22.680796 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-lgj8f" podStartSLOduration=3.271531594 podStartE2EDuration="35.680764593s" podCreationTimestamp="2025-12-02 20:27:47 +0000 UTC" firstStartedPulling="2025-12-02 20:27:49.241752724 +0000 UTC m=+952.245128258" lastFinishedPulling="2025-12-02 20:28:21.650985713 +0000 UTC m=+984.654361257" observedRunningTime="2025-12-02 20:28:22.646735367 +0000 UTC m=+985.650110901" watchObservedRunningTime="2025-12-02 20:28:22.680764593 +0000 UTC m=+985.684140127" Dec 02 20:28:22 crc kubenswrapper[4796]: I1202 20:28:22.685812 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-57548d458d-n5m58" podStartSLOduration=29.815495502 podStartE2EDuration="36.685790974s" podCreationTimestamp="2025-12-02 20:27:46 +0000 UTC" firstStartedPulling="2025-12-02 20:28:14.499688895 +0000 UTC m=+977.503064469" 
lastFinishedPulling="2025-12-02 20:28:21.369984407 +0000 UTC m=+984.373359941" observedRunningTime="2025-12-02 20:28:22.682274859 +0000 UTC m=+985.685650393" watchObservedRunningTime="2025-12-02 20:28:22.685790974 +0000 UTC m=+985.689166518" Dec 02 20:28:22 crc kubenswrapper[4796]: E1202 20:28:22.893799 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-grxnn" podUID="ac795881-0ae6-43cf-9a1d-119408238bf6" Dec 02 20:28:23 crc kubenswrapper[4796]: I1202 20:28:23.603523 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-8tgcn" event={"ID":"a5e8b895-e788-44f4-8481-520f1cbd75c0","Type":"ContainerStarted","Data":"25e8bdea4279af7dbb6b0e63b7ef6346681052d3d69c877bd420445f9291416e"} Dec 02 20:28:23 crc kubenswrapper[4796]: I1202 20:28:23.604697 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-8tgcn" Dec 02 20:28:23 crc kubenswrapper[4796]: I1202 20:28:23.618953 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-8tgcn" Dec 02 20:28:23 crc kubenswrapper[4796]: I1202 20:28:23.622506 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cshfw" event={"ID":"8161682e-0c53-41ab-bef7-99766302c3eb","Type":"ContainerStarted","Data":"943812dfda440fd0f064c3f01ac9ccf64bc00876349df78b0c5391c2a8e2f741"} Dec 02 20:28:23 crc kubenswrapper[4796]: I1202 20:28:23.623508 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cshfw" Dec 02 20:28:23 crc kubenswrapper[4796]: I1202 20:28:23.639712 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4nbjm9" event={"ID":"cefeb832-8af2-4be7-a143-a5ee5e28a091","Type":"ContainerStarted","Data":"73f98182921a03664cbe9d5b3bdc15121eee1e219c4bad7e2624ae57551cb33b"} Dec 02 20:28:23 crc kubenswrapper[4796]: I1202 20:28:23.640705 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4nbjm9" Dec 02 20:28:23 crc kubenswrapper[4796]: I1202 20:28:23.641295 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-8tgcn" podStartSLOduration=4.054560261 podStartE2EDuration="36.641243621s" podCreationTimestamp="2025-12-02 20:27:47 +0000 UTC" firstStartedPulling="2025-12-02 20:27:49.065242576 +0000 UTC m=+952.068618110" lastFinishedPulling="2025-12-02 20:28:21.651925946 +0000 UTC m=+984.655301470" observedRunningTime="2025-12-02 20:28:23.632382555 +0000 UTC m=+986.635758089" watchObservedRunningTime="2025-12-02 20:28:23.641243621 +0000 UTC m=+986.644619165" Dec 02 20:28:23 crc kubenswrapper[4796]: I1202 20:28:23.644637 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-6wk6f" event={"ID":"07c7a00f-b0ab-4f7a-bbfe-2b137d1541b7","Type":"ContainerStarted","Data":"736be00421b8b176c30d183b0b9c250b5ddab46f68ac80201f69a0601684a1af"} Dec 02 
20:28:23 crc kubenswrapper[4796]: I1202 20:28:23.646069 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-6wk6f" Dec 02 20:28:23 crc kubenswrapper[4796]: I1202 20:28:23.655626 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-6wk6f" Dec 02 20:28:23 crc kubenswrapper[4796]: I1202 20:28:23.664540 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-j9v7j" event={"ID":"7b338d05-8a86-49bf-b996-71e686d384b2","Type":"ContainerStarted","Data":"3f2166d298e568c8201b90082433b3500b18760dcfdcc4c7b9d0b503a43ad8b0"} Dec 02 20:28:23 crc kubenswrapper[4796]: I1202 20:28:23.665727 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-j9v7j" Dec 02 20:28:23 crc kubenswrapper[4796]: I1202 20:28:23.666658 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cshfw" Dec 02 20:28:23 crc kubenswrapper[4796]: I1202 20:28:23.671862 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-j9v7j" Dec 02 20:28:23 crc kubenswrapper[4796]: I1202 20:28:23.701589 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tmdq9" event={"ID":"8f4b078d-0d59-4f07-97d7-0537e71a7770","Type":"ContainerStarted","Data":"0d01b4fb03b97a3f6d1bf0b0e936ebdd2bf22a7b4406f5b67738c7341aba555f"} Dec 02 20:28:23 crc kubenswrapper[4796]: I1202 20:28:23.741267 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cshfw" podStartSLOduration=4.700018463 podStartE2EDuration="37.741226376s" podCreationTimestamp="2025-12-02 20:27:46 +0000 UTC" firstStartedPulling="2025-12-02 20:27:48.551272711 +0000 UTC m=+951.554648245" lastFinishedPulling="2025-12-02 20:28:21.592480624 +0000 UTC m=+984.595856158" observedRunningTime="2025-12-02 20:28:23.718451054 +0000 UTC m=+986.721826588" watchObservedRunningTime="2025-12-02 20:28:23.741226376 +0000 UTC m=+986.744601910" Dec 02 20:28:23 crc kubenswrapper[4796]: I1202 20:28:23.704170 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tmdq9" Dec 02 20:28:23 crc kubenswrapper[4796]: I1202 20:28:23.754358 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tmdq9" Dec 02 20:28:23 crc kubenswrapper[4796]: I1202 20:28:23.754371 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-4qgct" event={"ID":"8fa1598d-65f1-4841-8609-2c07e7dc8ffd","Type":"ContainerStarted","Data":"e185c60d97924bfd214435de512e9780c7dd89e93de3596f340417c9fb7ac892"} Dec 02 20:28:23 crc kubenswrapper[4796]: I1202 20:28:23.754399 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-4qgct" Dec 02 20:28:23 crc kubenswrapper[4796]: I1202 20:28:23.754439 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-4qgct" Dec 02 20:28:23 crc kubenswrapper[4796]: I1202 20:28:23.764031 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-cnfjq" event={"ID":"84c8fa19-9b42-4ac0-bcf2-2f7b3450c8f6","Type":"ContainerStarted","Data":"ef1d0ccc64842486520c53a9d5b1e920d3935ed46e78035f7f5b4f0cae0d4365"} Dec 02 20:28:23 crc kubenswrapper[4796]: I1202 20:28:23.764605 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-cnfjq" Dec 02 20:28:23 crc kubenswrapper[4796]: I1202 20:28:23.803616 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-cnfjq" Dec 02 20:28:23 crc kubenswrapper[4796]: I1202 20:28:23.831579 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-j9v7j" podStartSLOduration=4.471328139 podStartE2EDuration="36.831562228s" podCreationTimestamp="2025-12-02 20:27:47 +0000 UTC" firstStartedPulling="2025-12-02 20:27:49.219471125 +0000 UTC m=+952.222846659" lastFinishedPulling="2025-12-02 20:28:21.579705204 +0000 UTC m=+984.583080748" observedRunningTime="2025-12-02 20:28:23.798989517 +0000 UTC m=+986.802365051" watchObservedRunningTime="2025-12-02 20:28:23.831562228 +0000 UTC m=+986.834937762" Dec 02 20:28:23 crc kubenswrapper[4796]: I1202 20:28:23.832066 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-76f4f8cb8b-nfzhk" event={"ID":"718dd3d4-158b-4dcd-912e-51e3cfa993e7","Type":"ContainerStarted","Data":"5021094da3f08a8f13e263e3c6bd92501c79b859c5434ca22038eeb2fd948e55"} Dec 02 20:28:23 crc kubenswrapper[4796]: I1202 20:28:23.851623 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tmdq9" podStartSLOduration=4.512196642 podStartE2EDuration="36.851598123s" podCreationTimestamp="2025-12-02 20:27:47 +0000 UTC" firstStartedPulling="2025-12-02 20:27:49.312680439 +0000 UTC m=+952.316055973" lastFinishedPulling="2025-12-02 20:28:21.65208191 +0000 UTC m=+984.655457454" observedRunningTime="2025-12-02 20:28:23.829075437 +0000 UTC m=+986.832450971" watchObservedRunningTime="2025-12-02 20:28:23.851598123 +0000 UTC m=+986.854973657" Dec 02 20:28:23 crc kubenswrapper[4796]: I1202 20:28:23.859452 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-n5m58" event={"ID":"98ce5936-d2d0-480f-bbd6-79e074aa862c","Type":"ContainerStarted","Data":"1eb3e17e00af2993f58d001efb05be1316039ebe1431dc978279301fffd49454"} Dec 02 20:28:23 crc kubenswrapper[4796]: I1202 20:28:23.870666 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-6wk6f" podStartSLOduration=4.513452484 podStartE2EDuration="36.870646036s" podCreationTimestamp="2025-12-02 20:27:47 +0000 UTC" firstStartedPulling="2025-12-02 20:27:49.296157709 +0000 UTC m=+952.299533233" lastFinishedPulling="2025-12-02 20:28:21.653351241 +0000 UTC m=+984.656726785" observedRunningTime="2025-12-02 20:28:23.85187492 +0000 UTC m=+986.855250454" watchObservedRunningTime="2025-12-02 20:28:23.870646036 +0000 UTC m=+986.874021570" Dec 02 20:28:23 crc 
kubenswrapper[4796]: I1202 20:28:23.883621 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-754dcb5d59-v9gsq" event={"ID":"3117ee94-1d46-46b4-b567-508d22bc6bac","Type":"ContainerStarted","Data":"af007d3f66a4d9326df4831396152912b978bd210eed8684edf2d499baf2591d"} Dec 02 20:28:23 crc kubenswrapper[4796]: I1202 20:28:23.884372 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-754dcb5d59-v9gsq" Dec 02 20:28:23 crc kubenswrapper[4796]: I1202 20:28:23.900573 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-grxnn" event={"ID":"ac795881-0ae6-43cf-9a1d-119408238bf6","Type":"ContainerStarted","Data":"8ae3a678b6d86b13bfe253cd641f54dea3909102a6c635cce9920c79eef88486"} Dec 02 20:28:23 crc kubenswrapper[4796]: I1202 20:28:23.909544 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4nbjm9" podStartSLOduration=30.03935958 podStartE2EDuration="36.909521018s" podCreationTimestamp="2025-12-02 20:27:47 +0000 UTC" firstStartedPulling="2025-12-02 20:28:14.499713266 +0000 UTC m=+977.503088800" lastFinishedPulling="2025-12-02 20:28:21.369874704 +0000 UTC m=+984.373250238" observedRunningTime="2025-12-02 20:28:23.909380685 +0000 UTC m=+986.912756219" watchObservedRunningTime="2025-12-02 20:28:23.909521018 +0000 UTC m=+986.912896562" Dec 02 20:28:23 crc kubenswrapper[4796]: I1202 20:28:23.922848 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-s988m" event={"ID":"87e62b79-fb94-4209-88b2-c2b6b0966181","Type":"ContainerStarted","Data":"ae054af86eb4a0bcb9415e91a645f62163c46129adfe1917fa56dee4ed4367de"} Dec 02 20:28:23 crc kubenswrapper[4796]: I1202 20:28:23.922908 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-s988m" Dec 02 20:28:23 crc kubenswrapper[4796]: I1202 20:28:23.943233 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-s988m" Dec 02 20:28:24 crc kubenswrapper[4796]: I1202 20:28:24.011428 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-754dcb5d59-v9gsq" podStartSLOduration=37.011402339 podStartE2EDuration="37.011402339s" podCreationTimestamp="2025-12-02 20:27:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:28:23.990067452 +0000 UTC m=+986.993442986" watchObservedRunningTime="2025-12-02 20:28:24.011402339 +0000 UTC m=+987.014777873" Dec 02 20:28:24 crc kubenswrapper[4796]: I1202 20:28:24.041396 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-cnfjq" podStartSLOduration=5.424979813 podStartE2EDuration="38.041375826s" podCreationTimestamp="2025-12-02 20:27:46 +0000 UTC" firstStartedPulling="2025-12-02 20:27:49.096986384 +0000 UTC m=+952.100361928" lastFinishedPulling="2025-12-02 20:28:21.713382407 +0000 UTC m=+984.716757941" observedRunningTime="2025-12-02 20:28:24.038812075 +0000 UTC m=+987.042187609" watchObservedRunningTime="2025-12-02 
20:28:24.041375826 +0000 UTC m=+987.044751360" Dec 02 20:28:24 crc kubenswrapper[4796]: I1202 20:28:24.086864 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-4qgct" podStartSLOduration=5.477651997 podStartE2EDuration="38.08684155s" podCreationTimestamp="2025-12-02 20:27:46 +0000 UTC" firstStartedPulling="2025-12-02 20:27:49.041336229 +0000 UTC m=+952.044711773" lastFinishedPulling="2025-12-02 20:28:21.650525772 +0000 UTC m=+984.653901326" observedRunningTime="2025-12-02 20:28:24.077191825 +0000 UTC m=+987.080567359" watchObservedRunningTime="2025-12-02 20:28:24.08684155 +0000 UTC m=+987.090217074" Dec 02 20:28:24 crc kubenswrapper[4796]: I1202 20:28:24.155723 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-s988m" podStartSLOduration=5.383300552 podStartE2EDuration="38.15570432s" podCreationTimestamp="2025-12-02 20:27:46 +0000 UTC" firstStartedPulling="2025-12-02 20:27:48.858522509 +0000 UTC m=+951.861898043" lastFinishedPulling="2025-12-02 20:28:21.630926267 +0000 UTC m=+984.634301811" observedRunningTime="2025-12-02 20:28:24.153706002 +0000 UTC m=+987.157081536" watchObservedRunningTime="2025-12-02 20:28:24.15570432 +0000 UTC m=+987.159079854" Dec 02 20:28:24 crc kubenswrapper[4796]: I1202 20:28:24.932552 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-qccgk" event={"ID":"1180fd08-546d-431c-9583-10fef2f94b1f","Type":"ContainerStarted","Data":"4c798a0dfcf19f8b647767947bdab358f04a7d85d7193727b85a63f7cfd0bdd9"} Dec 02 20:28:24 crc kubenswrapper[4796]: I1202 20:28:24.933314 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-qccgk" Dec 02 20:28:24 crc kubenswrapper[4796]: I1202 20:28:24.934824 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-grxnn" event={"ID":"ac795881-0ae6-43cf-9a1d-119408238bf6","Type":"ContainerStarted","Data":"0ed67d43fedb1940366516458946994daf2d73573871c6c0d3180cd273fb814c"} Dec 02 20:28:24 crc kubenswrapper[4796]: I1202 20:28:24.934944 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-grxnn" Dec 02 20:28:24 crc kubenswrapper[4796]: I1202 20:28:24.946515 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-shlwt" event={"ID":"8cff73ad-f4f3-47a7-8ba1-985614f757a3","Type":"ContainerStarted","Data":"b9a6f8541e4b6620cbcd94c4812346f6431131c3f34e8ef672f7be4b3d5b6ea6"} Dec 02 20:28:24 crc kubenswrapper[4796]: I1202 20:28:24.946691 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-shlwt" Dec 02 20:28:24 crc kubenswrapper[4796]: I1202 20:28:24.949430 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-5zl48" event={"ID":"4f7d1bce-8d2f-4f79-9b65-067b14abc6ca","Type":"ContainerStarted","Data":"6d9c110eea9a82e080ed5970c95986a7dce6e40a3fef1b1103814db2cae0f99d"} Dec 02 20:28:24 crc kubenswrapper[4796]: I1202 20:28:24.951109 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-5zl48" Dec 02 20:28:24 crc kubenswrapper[4796]: I1202 20:28:24.959669 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-bv8px" event={"ID":"741037fd-f9c3-4461-8f0c-d94f1f869ec0","Type":"ContainerStarted","Data":"9c75f73b8971149e66ed5068fcb89140ec5d7c742bb037fac3fc86f0ae8544ff"} Dec 02 20:28:24 crc kubenswrapper[4796]: I1202 20:28:24.962832 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-tsrx8" event={"ID":"9f64c0f7-3638-4f4c-bf3e-8ab0da4f2f77","Type":"ContainerStarted","Data":"29b5adce5b2adac35191e867590488fdc1fbb170e5a426e82dbf8a3820d98e31"} Dec 02 20:28:24 crc kubenswrapper[4796]: I1202 20:28:24.963706 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-tsrx8" Dec 02 20:28:24 crc kubenswrapper[4796]: I1202 20:28:24.964154 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-qccgk" podStartSLOduration=4.911974943 podStartE2EDuration="38.964141131s" podCreationTimestamp="2025-12-02 20:27:46 +0000 UTC" firstStartedPulling="2025-12-02 20:27:48.826661889 +0000 UTC m=+951.830037423" lastFinishedPulling="2025-12-02 20:28:22.878828077 +0000 UTC m=+985.882203611" observedRunningTime="2025-12-02 20:28:24.963426523 +0000 UTC m=+987.966802057" watchObservedRunningTime="2025-12-02 20:28:24.964141131 +0000 UTC m=+987.967516665" Dec 02 20:28:24 crc kubenswrapper[4796]: I1202 20:28:24.992197 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-5zl48" podStartSLOduration=3.682689154 podStartE2EDuration="37.992174481s" podCreationTimestamp="2025-12-02 20:27:47 +0000 UTC" firstStartedPulling="2025-12-02 20:27:49.035908327 +0000 UTC m=+952.039283861" lastFinishedPulling="2025-12-02 20:28:23.345393654 +0000 UTC m=+986.348769188" observedRunningTime="2025-12-02 20:28:24.991417892 +0000 UTC m=+987.994793446" watchObservedRunningTime="2025-12-02 20:28:24.992174481 +0000 UTC m=+987.995550005" Dec 02 20:28:25 crc kubenswrapper[4796]: I1202 20:28:25.020450 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-grxnn" podStartSLOduration=3.218851859 podStartE2EDuration="39.020420396s" podCreationTimestamp="2025-12-02 20:27:46 +0000 UTC" firstStartedPulling="2025-12-02 20:27:48.621438197 +0000 UTC m=+951.624813731" lastFinishedPulling="2025-12-02 20:28:24.423006734 +0000 UTC m=+987.426382268" observedRunningTime="2025-12-02 20:28:25.010115205 +0000 UTC m=+988.013490739" watchObservedRunningTime="2025-12-02 20:28:25.020420396 +0000 UTC m=+988.023795930" Dec 02 20:28:25 crc kubenswrapper[4796]: I1202 20:28:25.036026 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-shlwt" podStartSLOduration=4.361402308 podStartE2EDuration="39.035997213s" podCreationTimestamp="2025-12-02 20:27:46 +0000 UTC" firstStartedPulling="2025-12-02 20:27:48.321611729 +0000 UTC m=+951.324987263" lastFinishedPulling="2025-12-02 20:28:22.996206634 +0000 UTC m=+985.999582168" observedRunningTime="2025-12-02 20:28:25.0280486 +0000 UTC m=+988.031424164" watchObservedRunningTime="2025-12-02 
20:28:25.035997213 +0000 UTC m=+988.039372747" Dec 02 20:28:25 crc kubenswrapper[4796]: I1202 20:28:25.066700 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-tsrx8" podStartSLOduration=4.486030715 podStartE2EDuration="39.066664357s" podCreationTimestamp="2025-12-02 20:27:46 +0000 UTC" firstStartedPulling="2025-12-02 20:27:48.640111558 +0000 UTC m=+951.643487092" lastFinishedPulling="2025-12-02 20:28:23.2207452 +0000 UTC m=+986.224120734" observedRunningTime="2025-12-02 20:28:25.05731014 +0000 UTC m=+988.060685664" watchObservedRunningTime="2025-12-02 20:28:25.066664357 +0000 UTC m=+988.070039891" Dec 02 20:28:25 crc kubenswrapper[4796]: I1202 20:28:25.091812 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-bv8px" podStartSLOduration=4.799196108 podStartE2EDuration="39.091786387s" podCreationTimestamp="2025-12-02 20:27:46 +0000 UTC" firstStartedPulling="2025-12-02 20:27:48.8064742 +0000 UTC m=+951.809849734" lastFinishedPulling="2025-12-02 20:28:23.099064479 +0000 UTC m=+986.102440013" observedRunningTime="2025-12-02 20:28:25.088986939 +0000 UTC m=+988.092362473" watchObservedRunningTime="2025-12-02 20:28:25.091786387 +0000 UTC m=+988.095161921" Dec 02 20:28:25 crc kubenswrapper[4796]: I1202 20:28:25.971039 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-bv8px" Dec 02 20:28:28 crc kubenswrapper[4796]: I1202 20:28:28.063168 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-76f4f8cb8b-nfzhk" Dec 02 20:28:29 crc kubenswrapper[4796]: I1202 20:28:29.966166 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-754dcb5d59-v9gsq" Dec 02 20:28:33 crc kubenswrapper[4796]: I1202 20:28:33.182325 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-57548d458d-n5m58" Dec 02 20:28:33 crc kubenswrapper[4796]: I1202 20:28:33.793768 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4nbjm9" Dec 02 20:28:37 crc kubenswrapper[4796]: I1202 20:28:37.131196 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-grxnn" Dec 02 20:28:37 crc kubenswrapper[4796]: I1202 20:28:37.158490 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-shlwt" Dec 02 20:28:37 crc kubenswrapper[4796]: I1202 20:28:37.166926 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-tsrx8" Dec 02 20:28:37 crc kubenswrapper[4796]: I1202 20:28:37.311536 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-qccgk" Dec 02 20:28:37 crc kubenswrapper[4796]: I1202 20:28:37.583854 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-bv8px" Dec 02 20:28:37 crc 
kubenswrapper[4796]: I1202 20:28:37.933991 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-5zl48" Dec 02 20:28:43 crc kubenswrapper[4796]: I1202 20:28:43.851160 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7f7b957f48-xsfs7"] Dec 02 20:28:43 crc kubenswrapper[4796]: I1202 20:28:43.851962 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-controller-operator-7f7b957f48-xsfs7" podUID="fcd27a49-b8b6-4635-bd73-f7caa2d3785a" containerName="operator" containerID="cri-o://8167ea65004b8cf421a89ca4cf139176169d272e2173cfc5b37e72d66d195577" gracePeriod=10 Dec 02 20:28:46 crc kubenswrapper[4796]: I1202 20:28:46.170138 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7f7b957f48-xsfs7" Dec 02 20:28:46 crc kubenswrapper[4796]: I1202 20:28:46.182268 4796 generic.go:334] "Generic (PLEG): container finished" podID="fcd27a49-b8b6-4635-bd73-f7caa2d3785a" containerID="8167ea65004b8cf421a89ca4cf139176169d272e2173cfc5b37e72d66d195577" exitCode=0 Dec 02 20:28:46 crc kubenswrapper[4796]: I1202 20:28:46.182320 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7f7b957f48-xsfs7" event={"ID":"fcd27a49-b8b6-4635-bd73-f7caa2d3785a","Type":"ContainerDied","Data":"8167ea65004b8cf421a89ca4cf139176169d272e2173cfc5b37e72d66d195577"} Dec 02 20:28:46 crc kubenswrapper[4796]: I1202 20:28:46.182355 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7f7b957f48-xsfs7" event={"ID":"fcd27a49-b8b6-4635-bd73-f7caa2d3785a","Type":"ContainerDied","Data":"b0bdabdb8924e0e6ce2e21ee05091fdf705b0ae1779c896a73d6df60d5a98960"} Dec 02 20:28:46 crc kubenswrapper[4796]: I1202 20:28:46.182382 4796 scope.go:117] "RemoveContainer" containerID="8167ea65004b8cf421a89ca4cf139176169d272e2173cfc5b37e72d66d195577" Dec 02 20:28:46 crc kubenswrapper[4796]: I1202 20:28:46.182488 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7f7b957f48-xsfs7" Dec 02 20:28:46 crc kubenswrapper[4796]: I1202 20:28:46.202811 4796 scope.go:117] "RemoveContainer" containerID="8167ea65004b8cf421a89ca4cf139176169d272e2173cfc5b37e72d66d195577" Dec 02 20:28:46 crc kubenswrapper[4796]: E1202 20:28:46.203360 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8167ea65004b8cf421a89ca4cf139176169d272e2173cfc5b37e72d66d195577\": container with ID starting with 8167ea65004b8cf421a89ca4cf139176169d272e2173cfc5b37e72d66d195577 not found: ID does not exist" containerID="8167ea65004b8cf421a89ca4cf139176169d272e2173cfc5b37e72d66d195577" Dec 02 20:28:46 crc kubenswrapper[4796]: I1202 20:28:46.203400 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8167ea65004b8cf421a89ca4cf139176169d272e2173cfc5b37e72d66d195577"} err="failed to get container status \"8167ea65004b8cf421a89ca4cf139176169d272e2173cfc5b37e72d66d195577\": rpc error: code = NotFound desc = could not find container \"8167ea65004b8cf421a89ca4cf139176169d272e2173cfc5b37e72d66d195577\": container with ID starting with 8167ea65004b8cf421a89ca4cf139176169d272e2173cfc5b37e72d66d195577 not found: ID does not exist" Dec 02 20:28:46 crc kubenswrapper[4796]: I1202 20:28:46.327754 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vrwb\" (UniqueName: \"kubernetes.io/projected/fcd27a49-b8b6-4635-bd73-f7caa2d3785a-kube-api-access-5vrwb\") pod \"fcd27a49-b8b6-4635-bd73-f7caa2d3785a\" (UID: \"fcd27a49-b8b6-4635-bd73-f7caa2d3785a\") " Dec 02 20:28:46 crc kubenswrapper[4796]: I1202 20:28:46.335934 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcd27a49-b8b6-4635-bd73-f7caa2d3785a-kube-api-access-5vrwb" (OuterVolumeSpecName: "kube-api-access-5vrwb") pod "fcd27a49-b8b6-4635-bd73-f7caa2d3785a" (UID: "fcd27a49-b8b6-4635-bd73-f7caa2d3785a"). InnerVolumeSpecName "kube-api-access-5vrwb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:28:46 crc kubenswrapper[4796]: I1202 20:28:46.429539 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vrwb\" (UniqueName: \"kubernetes.io/projected/fcd27a49-b8b6-4635-bd73-f7caa2d3785a-kube-api-access-5vrwb\") on node \"crc\" DevicePath \"\"" Dec 02 20:28:46 crc kubenswrapper[4796]: I1202 20:28:46.521189 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7f7b957f48-xsfs7"] Dec 02 20:28:46 crc kubenswrapper[4796]: I1202 20:28:46.529120 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7f7b957f48-xsfs7"] Dec 02 20:28:46 crc kubenswrapper[4796]: I1202 20:28:46.696174 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-76f4f8cb8b-nfzhk"] Dec 02 20:28:46 crc kubenswrapper[4796]: I1202 20:28:46.696697 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/watcher-operator-controller-manager-76f4f8cb8b-nfzhk" podUID="718dd3d4-158b-4dcd-912e-51e3cfa993e7" containerName="manager" containerID="cri-o://ed7ceec4dc6ad9fa5b7cc84e6de794343b7d522dc3537de8792b5475f016ffd5" gracePeriod=10 Dec 02 20:28:46 crc kubenswrapper[4796]: I1202 20:28:46.696848 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/watcher-operator-controller-manager-76f4f8cb8b-nfzhk" podUID="718dd3d4-158b-4dcd-912e-51e3cfa993e7" containerName="kube-rbac-proxy" containerID="cri-o://5021094da3f08a8f13e263e3c6bd92501c79b859c5434ca22038eeb2fd948e55" gracePeriod=10 Dec 02 20:28:47 crc kubenswrapper[4796]: I1202 20:28:47.198165 4796 generic.go:334] "Generic (PLEG): container finished" podID="718dd3d4-158b-4dcd-912e-51e3cfa993e7" containerID="5021094da3f08a8f13e263e3c6bd92501c79b859c5434ca22038eeb2fd948e55" exitCode=0 Dec 02 20:28:47 crc kubenswrapper[4796]: I1202 20:28:47.198719 4796 generic.go:334] "Generic (PLEG): container finished" podID="718dd3d4-158b-4dcd-912e-51e3cfa993e7" containerID="ed7ceec4dc6ad9fa5b7cc84e6de794343b7d522dc3537de8792b5475f016ffd5" exitCode=0 Dec 02 20:28:47 crc kubenswrapper[4796]: I1202 20:28:47.198272 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-76f4f8cb8b-nfzhk" event={"ID":"718dd3d4-158b-4dcd-912e-51e3cfa993e7","Type":"ContainerDied","Data":"5021094da3f08a8f13e263e3c6bd92501c79b859c5434ca22038eeb2fd948e55"} Dec 02 20:28:47 crc kubenswrapper[4796]: I1202 20:28:47.198790 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-76f4f8cb8b-nfzhk" event={"ID":"718dd3d4-158b-4dcd-912e-51e3cfa993e7","Type":"ContainerDied","Data":"ed7ceec4dc6ad9fa5b7cc84e6de794343b7d522dc3537de8792b5475f016ffd5"} Dec 02 20:28:47 crc kubenswrapper[4796]: I1202 20:28:47.198806 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-76f4f8cb8b-nfzhk" event={"ID":"718dd3d4-158b-4dcd-912e-51e3cfa993e7","Type":"ContainerDied","Data":"64d49430c6ae3fdbbb22722440f515420f437f0b40cc1713f11316c0cf4874e0"} Dec 02 20:28:47 crc kubenswrapper[4796]: I1202 20:28:47.198820 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64d49430c6ae3fdbbb22722440f515420f437f0b40cc1713f11316c0cf4874e0" Dec 02 20:28:47 crc kubenswrapper[4796]: I1202 20:28:47.207142 
4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-76f4f8cb8b-nfzhk" Dec 02 20:28:47 crc kubenswrapper[4796]: I1202 20:28:47.282480 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcd27a49-b8b6-4635-bd73-f7caa2d3785a" path="/var/lib/kubelet/pods/fcd27a49-b8b6-4635-bd73-f7caa2d3785a/volumes" Dec 02 20:28:47 crc kubenswrapper[4796]: I1202 20:28:47.343279 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xw4g2\" (UniqueName: \"kubernetes.io/projected/718dd3d4-158b-4dcd-912e-51e3cfa993e7-kube-api-access-xw4g2\") pod \"718dd3d4-158b-4dcd-912e-51e3cfa993e7\" (UID: \"718dd3d4-158b-4dcd-912e-51e3cfa993e7\") " Dec 02 20:28:47 crc kubenswrapper[4796]: I1202 20:28:47.349701 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/718dd3d4-158b-4dcd-912e-51e3cfa993e7-kube-api-access-xw4g2" (OuterVolumeSpecName: "kube-api-access-xw4g2") pod "718dd3d4-158b-4dcd-912e-51e3cfa993e7" (UID: "718dd3d4-158b-4dcd-912e-51e3cfa993e7"). InnerVolumeSpecName "kube-api-access-xw4g2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:28:47 crc kubenswrapper[4796]: I1202 20:28:47.445224 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xw4g2\" (UniqueName: \"kubernetes.io/projected/718dd3d4-158b-4dcd-912e-51e3cfa993e7-kube-api-access-xw4g2\") on node \"crc\" DevicePath \"\"" Dec 02 20:28:48 crc kubenswrapper[4796]: I1202 20:28:48.210322 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-76f4f8cb8b-nfzhk" Dec 02 20:28:48 crc kubenswrapper[4796]: I1202 20:28:48.256361 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-76f4f8cb8b-nfzhk"] Dec 02 20:28:48 crc kubenswrapper[4796]: I1202 20:28:48.271010 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-76f4f8cb8b-nfzhk"] Dec 02 20:28:49 crc kubenswrapper[4796]: I1202 20:28:49.275992 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="718dd3d4-158b-4dcd-912e-51e3cfa993e7" path="/var/lib/kubelet/pods/718dd3d4-158b-4dcd-912e-51e3cfa993e7/volumes" Dec 02 20:28:51 crc kubenswrapper[4796]: I1202 20:28:51.781644 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-index-wvxkv"] Dec 02 20:28:51 crc kubenswrapper[4796]: E1202 20:28:51.782494 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef" containerName="registry-server" Dec 02 20:28:51 crc kubenswrapper[4796]: I1202 20:28:51.782516 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef" containerName="registry-server" Dec 02 20:28:51 crc kubenswrapper[4796]: E1202 20:28:51.782556 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="718dd3d4-158b-4dcd-912e-51e3cfa993e7" containerName="manager" Dec 02 20:28:51 crc kubenswrapper[4796]: I1202 20:28:51.782570 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="718dd3d4-158b-4dcd-912e-51e3cfa993e7" containerName="manager" Dec 02 20:28:51 crc kubenswrapper[4796]: E1202 20:28:51.784066 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="718dd3d4-158b-4dcd-912e-51e3cfa993e7" containerName="kube-rbac-proxy" Dec 02 
20:28:51 crc kubenswrapper[4796]: I1202 20:28:51.784086 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="718dd3d4-158b-4dcd-912e-51e3cfa993e7" containerName="kube-rbac-proxy" Dec 02 20:28:51 crc kubenswrapper[4796]: E1202 20:28:51.784109 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef" containerName="extract-utilities" Dec 02 20:28:51 crc kubenswrapper[4796]: I1202 20:28:51.784119 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef" containerName="extract-utilities" Dec 02 20:28:51 crc kubenswrapper[4796]: E1202 20:28:51.784145 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef" containerName="extract-content" Dec 02 20:28:51 crc kubenswrapper[4796]: I1202 20:28:51.784154 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef" containerName="extract-content" Dec 02 20:28:51 crc kubenswrapper[4796]: E1202 20:28:51.784167 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcd27a49-b8b6-4635-bd73-f7caa2d3785a" containerName="operator" Dec 02 20:28:51 crc kubenswrapper[4796]: I1202 20:28:51.784175 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcd27a49-b8b6-4635-bd73-f7caa2d3785a" containerName="operator" Dec 02 20:28:51 crc kubenswrapper[4796]: I1202 20:28:51.784549 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcd27a49-b8b6-4635-bd73-f7caa2d3785a" containerName="operator" Dec 02 20:28:51 crc kubenswrapper[4796]: I1202 20:28:51.784758 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="718dd3d4-158b-4dcd-912e-51e3cfa993e7" containerName="manager" Dec 02 20:28:51 crc kubenswrapper[4796]: I1202 20:28:51.784960 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a25fbf8d-dfdf-4c33-9ff6-6df74451d0ef" containerName="registry-server" Dec 02 20:28:51 crc kubenswrapper[4796]: I1202 20:28:51.784980 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="718dd3d4-158b-4dcd-912e-51e3cfa993e7" containerName="kube-rbac-proxy" Dec 02 20:28:51 crc kubenswrapper[4796]: I1202 20:28:51.785722 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-index-wvxkv" Dec 02 20:28:51 crc kubenswrapper[4796]: I1202 20:28:51.801661 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-wvxkv"] Dec 02 20:28:51 crc kubenswrapper[4796]: I1202 20:28:51.802867 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-index-dockercfg-pdsl6" Dec 02 20:28:51 crc kubenswrapper[4796]: I1202 20:28:51.932756 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7m7g\" (UniqueName: \"kubernetes.io/projected/a651afe9-96cd-4eb4-89c5-8a2680d624e6-kube-api-access-w7m7g\") pod \"watcher-operator-index-wvxkv\" (UID: \"a651afe9-96cd-4eb4-89c5-8a2680d624e6\") " pod="openstack-operators/watcher-operator-index-wvxkv" Dec 02 20:28:52 crc kubenswrapper[4796]: I1202 20:28:52.034774 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7m7g\" (UniqueName: \"kubernetes.io/projected/a651afe9-96cd-4eb4-89c5-8a2680d624e6-kube-api-access-w7m7g\") pod \"watcher-operator-index-wvxkv\" (UID: \"a651afe9-96cd-4eb4-89c5-8a2680d624e6\") " pod="openstack-operators/watcher-operator-index-wvxkv" Dec 02 20:28:52 crc kubenswrapper[4796]: I1202 20:28:52.080351 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7m7g\" (UniqueName: \"kubernetes.io/projected/a651afe9-96cd-4eb4-89c5-8a2680d624e6-kube-api-access-w7m7g\") pod \"watcher-operator-index-wvxkv\" (UID: \"a651afe9-96cd-4eb4-89c5-8a2680d624e6\") " pod="openstack-operators/watcher-operator-index-wvxkv" Dec 02 20:28:52 crc kubenswrapper[4796]: I1202 20:28:52.151767 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-index-wvxkv" Dec 02 20:28:52 crc kubenswrapper[4796]: I1202 20:28:52.667296 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-wvxkv"] Dec 02 20:28:53 crc kubenswrapper[4796]: I1202 20:28:53.272849 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-wvxkv" event={"ID":"a651afe9-96cd-4eb4-89c5-8a2680d624e6","Type":"ContainerStarted","Data":"5b9aa4ebb1e07e0bc36b48d086401a244642713872235edf8eea1cb807c696f5"} Dec 02 20:28:53 crc kubenswrapper[4796]: I1202 20:28:53.273119 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-wvxkv" event={"ID":"a651afe9-96cd-4eb4-89c5-8a2680d624e6","Type":"ContainerStarted","Data":"475032fc40a6cfb54501925c9634ea3681a3e99ca36ec824941f980c3d1670df"} Dec 02 20:28:53 crc kubenswrapper[4796]: I1202 20:28:53.296940 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-index-wvxkv" podStartSLOduration=2.064923057 podStartE2EDuration="2.296913045s" podCreationTimestamp="2025-12-02 20:28:51 +0000 UTC" firstStartedPulling="2025-12-02 20:28:52.67793168 +0000 UTC m=+1015.681307214" lastFinishedPulling="2025-12-02 20:28:52.909921668 +0000 UTC m=+1015.913297202" observedRunningTime="2025-12-02 20:28:53.291186996 +0000 UTC m=+1016.294562540" watchObservedRunningTime="2025-12-02 20:28:53.296913045 +0000 UTC m=+1016.300288579" Dec 02 20:28:56 crc kubenswrapper[4796]: I1202 20:28:56.169858 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-index-wvxkv"] Dec 02 20:28:56 crc kubenswrapper[4796]: I1202 20:28:56.171391 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/watcher-operator-index-wvxkv" podUID="a651afe9-96cd-4eb4-89c5-8a2680d624e6" containerName="registry-server" containerID="cri-o://5b9aa4ebb1e07e0bc36b48d086401a244642713872235edf8eea1cb807c696f5" gracePeriod=2 Dec 02 20:28:56 crc kubenswrapper[4796]: I1202 20:28:56.300606 4796 generic.go:334] "Generic (PLEG): container finished" podID="a651afe9-96cd-4eb4-89c5-8a2680d624e6" containerID="5b9aa4ebb1e07e0bc36b48d086401a244642713872235edf8eea1cb807c696f5" exitCode=0 Dec 02 20:28:56 crc kubenswrapper[4796]: I1202 20:28:56.300801 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-wvxkv" event={"ID":"a651afe9-96cd-4eb4-89c5-8a2680d624e6","Type":"ContainerDied","Data":"5b9aa4ebb1e07e0bc36b48d086401a244642713872235edf8eea1cb807c696f5"} Dec 02 20:28:56 crc kubenswrapper[4796]: I1202 20:28:56.645810 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-index-wvxkv" Dec 02 20:28:56 crc kubenswrapper[4796]: I1202 20:28:56.812490 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7m7g\" (UniqueName: \"kubernetes.io/projected/a651afe9-96cd-4eb4-89c5-8a2680d624e6-kube-api-access-w7m7g\") pod \"a651afe9-96cd-4eb4-89c5-8a2680d624e6\" (UID: \"a651afe9-96cd-4eb4-89c5-8a2680d624e6\") " Dec 02 20:28:56 crc kubenswrapper[4796]: I1202 20:28:56.819827 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a651afe9-96cd-4eb4-89c5-8a2680d624e6-kube-api-access-w7m7g" (OuterVolumeSpecName: "kube-api-access-w7m7g") pod "a651afe9-96cd-4eb4-89c5-8a2680d624e6" (UID: "a651afe9-96cd-4eb4-89c5-8a2680d624e6"). InnerVolumeSpecName "kube-api-access-w7m7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:28:56 crc kubenswrapper[4796]: I1202 20:28:56.914749 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7m7g\" (UniqueName: \"kubernetes.io/projected/a651afe9-96cd-4eb4-89c5-8a2680d624e6-kube-api-access-w7m7g\") on node \"crc\" DevicePath \"\"" Dec 02 20:28:56 crc kubenswrapper[4796]: I1202 20:28:56.976089 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-index-xhnj2"] Dec 02 20:28:56 crc kubenswrapper[4796]: E1202 20:28:56.976769 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a651afe9-96cd-4eb4-89c5-8a2680d624e6" containerName="registry-server" Dec 02 20:28:56 crc kubenswrapper[4796]: I1202 20:28:56.976789 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a651afe9-96cd-4eb4-89c5-8a2680d624e6" containerName="registry-server" Dec 02 20:28:56 crc kubenswrapper[4796]: I1202 20:28:56.976970 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a651afe9-96cd-4eb4-89c5-8a2680d624e6" containerName="registry-server" Dec 02 20:28:56 crc kubenswrapper[4796]: I1202 20:28:56.977628 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-index-xhnj2" Dec 02 20:28:56 crc kubenswrapper[4796]: I1202 20:28:56.985567 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-xhnj2"] Dec 02 20:28:57 crc kubenswrapper[4796]: I1202 20:28:57.119748 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2tr4\" (UniqueName: \"kubernetes.io/projected/9388a7aa-ece0-4878-8d63-c095adf67256-kube-api-access-v2tr4\") pod \"watcher-operator-index-xhnj2\" (UID: \"9388a7aa-ece0-4878-8d63-c095adf67256\") " pod="openstack-operators/watcher-operator-index-xhnj2" Dec 02 20:28:57 crc kubenswrapper[4796]: I1202 20:28:57.222067 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2tr4\" (UniqueName: \"kubernetes.io/projected/9388a7aa-ece0-4878-8d63-c095adf67256-kube-api-access-v2tr4\") pod \"watcher-operator-index-xhnj2\" (UID: \"9388a7aa-ece0-4878-8d63-c095adf67256\") " pod="openstack-operators/watcher-operator-index-xhnj2" Dec 02 20:28:57 crc kubenswrapper[4796]: I1202 20:28:57.245060 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2tr4\" (UniqueName: \"kubernetes.io/projected/9388a7aa-ece0-4878-8d63-c095adf67256-kube-api-access-v2tr4\") pod \"watcher-operator-index-xhnj2\" (UID: \"9388a7aa-ece0-4878-8d63-c095adf67256\") " pod="openstack-operators/watcher-operator-index-xhnj2" Dec 02 20:28:57 crc kubenswrapper[4796]: I1202 20:28:57.304526 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-index-xhnj2" Dec 02 20:28:57 crc kubenswrapper[4796]: I1202 20:28:57.313411 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-wvxkv" event={"ID":"a651afe9-96cd-4eb4-89c5-8a2680d624e6","Type":"ContainerDied","Data":"475032fc40a6cfb54501925c9634ea3681a3e99ca36ec824941f980c3d1670df"} Dec 02 20:28:57 crc kubenswrapper[4796]: I1202 20:28:57.313708 4796 scope.go:117] "RemoveContainer" containerID="5b9aa4ebb1e07e0bc36b48d086401a244642713872235edf8eea1cb807c696f5" Dec 02 20:28:57 crc kubenswrapper[4796]: I1202 20:28:57.313994 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-index-wvxkv" Dec 02 20:28:57 crc kubenswrapper[4796]: I1202 20:28:57.349895 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-index-wvxkv"] Dec 02 20:28:57 crc kubenswrapper[4796]: I1202 20:28:57.355779 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/watcher-operator-index-wvxkv"] Dec 02 20:28:57 crc kubenswrapper[4796]: I1202 20:28:57.849476 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-xhnj2"] Dec 02 20:28:57 crc kubenswrapper[4796]: W1202 20:28:57.853579 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9388a7aa_ece0_4878_8d63_c095adf67256.slice/crio-3765d778a105ed22b18ba4849b48e7f2063f5a31552c4abd5e56300adf42dfe2 WatchSource:0}: Error finding container 3765d778a105ed22b18ba4849b48e7f2063f5a31552c4abd5e56300adf42dfe2: Status 404 returned error can't find the container with id 3765d778a105ed22b18ba4849b48e7f2063f5a31552c4abd5e56300adf42dfe2 Dec 02 20:28:58 crc kubenswrapper[4796]: I1202 20:28:58.324562 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-xhnj2" event={"ID":"9388a7aa-ece0-4878-8d63-c095adf67256","Type":"ContainerStarted","Data":"5291716915c5674635c1f64d5778a57230779c8b16a25338af468f3b147e9d1d"} Dec 02 20:28:58 crc kubenswrapper[4796]: I1202 20:28:58.324607 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-xhnj2" event={"ID":"9388a7aa-ece0-4878-8d63-c095adf67256","Type":"ContainerStarted","Data":"3765d778a105ed22b18ba4849b48e7f2063f5a31552c4abd5e56300adf42dfe2"} Dec 02 20:28:58 crc kubenswrapper[4796]: I1202 20:28:58.348672 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-index-xhnj2" podStartSLOduration=2.155566121 podStartE2EDuration="2.348648234s" podCreationTimestamp="2025-12-02 20:28:56 +0000 UTC" firstStartedPulling="2025-12-02 20:28:57.858756451 +0000 UTC m=+1020.862131985" lastFinishedPulling="2025-12-02 20:28:58.051838554 +0000 UTC m=+1021.055214098" observedRunningTime="2025-12-02 20:28:58.340030135 +0000 UTC m=+1021.343405669" watchObservedRunningTime="2025-12-02 20:28:58.348648234 +0000 UTC m=+1021.352023788" Dec 02 20:28:59 crc kubenswrapper[4796]: I1202 20:28:59.284099 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a651afe9-96cd-4eb4-89c5-8a2680d624e6" path="/var/lib/kubelet/pods/a651afe9-96cd-4eb4-89c5-8a2680d624e6/volumes" Dec 02 20:29:07 crc kubenswrapper[4796]: I1202 20:29:07.305522 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-index-xhnj2" Dec 02 20:29:07 crc kubenswrapper[4796]: I1202 20:29:07.307888 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/watcher-operator-index-xhnj2" Dec 02 20:29:07 crc kubenswrapper[4796]: I1202 20:29:07.341711 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/watcher-operator-index-xhnj2" Dec 02 20:29:07 crc kubenswrapper[4796]: I1202 20:29:07.461181 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-index-xhnj2" Dec 02 20:29:10 crc kubenswrapper[4796]: I1202 20:29:10.867679 4796 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/fc96d06c3e39ea04b86eb6900b16243d90f8687a90757dd22bb5b9493cswlln"] Dec 02 20:29:10 crc kubenswrapper[4796]: I1202 20:29:10.870286 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/fc96d06c3e39ea04b86eb6900b16243d90f8687a90757dd22bb5b9493cswlln" Dec 02 20:29:10 crc kubenswrapper[4796]: I1202 20:29:10.874021 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-dp882" Dec 02 20:29:10 crc kubenswrapper[4796]: I1202 20:29:10.886995 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/fc96d06c3e39ea04b86eb6900b16243d90f8687a90757dd22bb5b9493cswlln"] Dec 02 20:29:10 crc kubenswrapper[4796]: I1202 20:29:10.946642 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2lh7\" (UniqueName: \"kubernetes.io/projected/1f1197d5-f1e2-422c-93ba-2150f58ea971-kube-api-access-w2lh7\") pod \"fc96d06c3e39ea04b86eb6900b16243d90f8687a90757dd22bb5b9493cswlln\" (UID: \"1f1197d5-f1e2-422c-93ba-2150f58ea971\") " pod="openstack-operators/fc96d06c3e39ea04b86eb6900b16243d90f8687a90757dd22bb5b9493cswlln" Dec 02 20:29:10 crc kubenswrapper[4796]: I1202 20:29:10.947003 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f1197d5-f1e2-422c-93ba-2150f58ea971-bundle\") pod \"fc96d06c3e39ea04b86eb6900b16243d90f8687a90757dd22bb5b9493cswlln\" (UID: \"1f1197d5-f1e2-422c-93ba-2150f58ea971\") " pod="openstack-operators/fc96d06c3e39ea04b86eb6900b16243d90f8687a90757dd22bb5b9493cswlln" Dec 02 20:29:10 crc kubenswrapper[4796]: I1202 20:29:10.947153 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f1197d5-f1e2-422c-93ba-2150f58ea971-util\") pod \"fc96d06c3e39ea04b86eb6900b16243d90f8687a90757dd22bb5b9493cswlln\" (UID: \"1f1197d5-f1e2-422c-93ba-2150f58ea971\") " pod="openstack-operators/fc96d06c3e39ea04b86eb6900b16243d90f8687a90757dd22bb5b9493cswlln" Dec 02 20:29:11 crc kubenswrapper[4796]: I1202 20:29:11.048763 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f1197d5-f1e2-422c-93ba-2150f58ea971-util\") pod \"fc96d06c3e39ea04b86eb6900b16243d90f8687a90757dd22bb5b9493cswlln\" (UID: \"1f1197d5-f1e2-422c-93ba-2150f58ea971\") " pod="openstack-operators/fc96d06c3e39ea04b86eb6900b16243d90f8687a90757dd22bb5b9493cswlln" Dec 02 20:29:11 crc kubenswrapper[4796]: I1202 20:29:11.049144 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2lh7\" (UniqueName: \"kubernetes.io/projected/1f1197d5-f1e2-422c-93ba-2150f58ea971-kube-api-access-w2lh7\") pod \"fc96d06c3e39ea04b86eb6900b16243d90f8687a90757dd22bb5b9493cswlln\" (UID: \"1f1197d5-f1e2-422c-93ba-2150f58ea971\") " pod="openstack-operators/fc96d06c3e39ea04b86eb6900b16243d90f8687a90757dd22bb5b9493cswlln" Dec 02 20:29:11 crc kubenswrapper[4796]: I1202 20:29:11.049173 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f1197d5-f1e2-422c-93ba-2150f58ea971-bundle\") pod \"fc96d06c3e39ea04b86eb6900b16243d90f8687a90757dd22bb5b9493cswlln\" (UID: \"1f1197d5-f1e2-422c-93ba-2150f58ea971\") " pod="openstack-operators/fc96d06c3e39ea04b86eb6900b16243d90f8687a90757dd22bb5b9493cswlln" Dec 02 20:29:11 
crc kubenswrapper[4796]: I1202 20:29:11.049876 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f1197d5-f1e2-422c-93ba-2150f58ea971-util\") pod \"fc96d06c3e39ea04b86eb6900b16243d90f8687a90757dd22bb5b9493cswlln\" (UID: \"1f1197d5-f1e2-422c-93ba-2150f58ea971\") " pod="openstack-operators/fc96d06c3e39ea04b86eb6900b16243d90f8687a90757dd22bb5b9493cswlln" Dec 02 20:29:11 crc kubenswrapper[4796]: I1202 20:29:11.050003 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f1197d5-f1e2-422c-93ba-2150f58ea971-bundle\") pod \"fc96d06c3e39ea04b86eb6900b16243d90f8687a90757dd22bb5b9493cswlln\" (UID: \"1f1197d5-f1e2-422c-93ba-2150f58ea971\") " pod="openstack-operators/fc96d06c3e39ea04b86eb6900b16243d90f8687a90757dd22bb5b9493cswlln" Dec 02 20:29:11 crc kubenswrapper[4796]: I1202 20:29:11.072514 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2lh7\" (UniqueName: \"kubernetes.io/projected/1f1197d5-f1e2-422c-93ba-2150f58ea971-kube-api-access-w2lh7\") pod \"fc96d06c3e39ea04b86eb6900b16243d90f8687a90757dd22bb5b9493cswlln\" (UID: \"1f1197d5-f1e2-422c-93ba-2150f58ea971\") " pod="openstack-operators/fc96d06c3e39ea04b86eb6900b16243d90f8687a90757dd22bb5b9493cswlln" Dec 02 20:29:11 crc kubenswrapper[4796]: I1202 20:29:11.213120 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/fc96d06c3e39ea04b86eb6900b16243d90f8687a90757dd22bb5b9493cswlln" Dec 02 20:29:11 crc kubenswrapper[4796]: I1202 20:29:11.679538 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/fc96d06c3e39ea04b86eb6900b16243d90f8687a90757dd22bb5b9493cswlln"] Dec 02 20:29:12 crc kubenswrapper[4796]: I1202 20:29:12.463199 4796 generic.go:334] "Generic (PLEG): container finished" podID="1f1197d5-f1e2-422c-93ba-2150f58ea971" containerID="07f141d7310f24bfda501ace75148162ca9c12b43112a1ac5c6b69580274c4be" exitCode=0 Dec 02 20:29:12 crc kubenswrapper[4796]: I1202 20:29:12.463725 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/fc96d06c3e39ea04b86eb6900b16243d90f8687a90757dd22bb5b9493cswlln" event={"ID":"1f1197d5-f1e2-422c-93ba-2150f58ea971","Type":"ContainerDied","Data":"07f141d7310f24bfda501ace75148162ca9c12b43112a1ac5c6b69580274c4be"} Dec 02 20:29:12 crc kubenswrapper[4796]: I1202 20:29:12.463762 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/fc96d06c3e39ea04b86eb6900b16243d90f8687a90757dd22bb5b9493cswlln" event={"ID":"1f1197d5-f1e2-422c-93ba-2150f58ea971","Type":"ContainerStarted","Data":"f25706d041e9f17fd23c01d4164acc40325d144d02d953c60c6278d25ccb948b"} Dec 02 20:29:13 crc kubenswrapper[4796]: I1202 20:29:13.480726 4796 generic.go:334] "Generic (PLEG): container finished" podID="1f1197d5-f1e2-422c-93ba-2150f58ea971" containerID="6196cbbd929dafd643f158df17f0da3e3ac6f74e36ac1a86ff4b016892dd2105" exitCode=0 Dec 02 20:29:13 crc kubenswrapper[4796]: I1202 20:29:13.480812 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/fc96d06c3e39ea04b86eb6900b16243d90f8687a90757dd22bb5b9493cswlln" event={"ID":"1f1197d5-f1e2-422c-93ba-2150f58ea971","Type":"ContainerDied","Data":"6196cbbd929dafd643f158df17f0da3e3ac6f74e36ac1a86ff4b016892dd2105"} Dec 02 20:29:14 crc kubenswrapper[4796]: I1202 20:29:14.506098 4796 generic.go:334] "Generic (PLEG): container finished" podID="1f1197d5-f1e2-422c-93ba-2150f58ea971" 
containerID="21bf8d55e2b1431a093c76ad74f7881b3d58e8f47c5736599707bab79dd686e1" exitCode=0 Dec 02 20:29:14 crc kubenswrapper[4796]: I1202 20:29:14.506147 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/fc96d06c3e39ea04b86eb6900b16243d90f8687a90757dd22bb5b9493cswlln" event={"ID":"1f1197d5-f1e2-422c-93ba-2150f58ea971","Type":"ContainerDied","Data":"21bf8d55e2b1431a093c76ad74f7881b3d58e8f47c5736599707bab79dd686e1"} Dec 02 20:29:15 crc kubenswrapper[4796]: I1202 20:29:15.888870 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/fc96d06c3e39ea04b86eb6900b16243d90f8687a90757dd22bb5b9493cswlln" Dec 02 20:29:15 crc kubenswrapper[4796]: I1202 20:29:15.946612 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f1197d5-f1e2-422c-93ba-2150f58ea971-util\") pod \"1f1197d5-f1e2-422c-93ba-2150f58ea971\" (UID: \"1f1197d5-f1e2-422c-93ba-2150f58ea971\") " Dec 02 20:29:15 crc kubenswrapper[4796]: I1202 20:29:15.946729 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2lh7\" (UniqueName: \"kubernetes.io/projected/1f1197d5-f1e2-422c-93ba-2150f58ea971-kube-api-access-w2lh7\") pod \"1f1197d5-f1e2-422c-93ba-2150f58ea971\" (UID: \"1f1197d5-f1e2-422c-93ba-2150f58ea971\") " Dec 02 20:29:15 crc kubenswrapper[4796]: I1202 20:29:15.946840 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f1197d5-f1e2-422c-93ba-2150f58ea971-bundle\") pod \"1f1197d5-f1e2-422c-93ba-2150f58ea971\" (UID: \"1f1197d5-f1e2-422c-93ba-2150f58ea971\") " Dec 02 20:29:15 crc kubenswrapper[4796]: I1202 20:29:15.947968 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f1197d5-f1e2-422c-93ba-2150f58ea971-bundle" (OuterVolumeSpecName: "bundle") pod "1f1197d5-f1e2-422c-93ba-2150f58ea971" (UID: "1f1197d5-f1e2-422c-93ba-2150f58ea971"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:29:15 crc kubenswrapper[4796]: I1202 20:29:15.954594 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f1197d5-f1e2-422c-93ba-2150f58ea971-kube-api-access-w2lh7" (OuterVolumeSpecName: "kube-api-access-w2lh7") pod "1f1197d5-f1e2-422c-93ba-2150f58ea971" (UID: "1f1197d5-f1e2-422c-93ba-2150f58ea971"). InnerVolumeSpecName "kube-api-access-w2lh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:29:15 crc kubenswrapper[4796]: I1202 20:29:15.981025 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f1197d5-f1e2-422c-93ba-2150f58ea971-util" (OuterVolumeSpecName: "util") pod "1f1197d5-f1e2-422c-93ba-2150f58ea971" (UID: "1f1197d5-f1e2-422c-93ba-2150f58ea971"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:29:16 crc kubenswrapper[4796]: I1202 20:29:16.049040 4796 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f1197d5-f1e2-422c-93ba-2150f58ea971-util\") on node \"crc\" DevicePath \"\"" Dec 02 20:29:16 crc kubenswrapper[4796]: I1202 20:29:16.049084 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2lh7\" (UniqueName: \"kubernetes.io/projected/1f1197d5-f1e2-422c-93ba-2150f58ea971-kube-api-access-w2lh7\") on node \"crc\" DevicePath \"\"" Dec 02 20:29:16 crc kubenswrapper[4796]: I1202 20:29:16.049097 4796 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f1197d5-f1e2-422c-93ba-2150f58ea971-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:29:16 crc kubenswrapper[4796]: I1202 20:29:16.525435 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/fc96d06c3e39ea04b86eb6900b16243d90f8687a90757dd22bb5b9493cswlln" event={"ID":"1f1197d5-f1e2-422c-93ba-2150f58ea971","Type":"ContainerDied","Data":"f25706d041e9f17fd23c01d4164acc40325d144d02d953c60c6278d25ccb948b"} Dec 02 20:29:16 crc kubenswrapper[4796]: I1202 20:29:16.525481 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f25706d041e9f17fd23c01d4164acc40325d144d02d953c60c6278d25ccb948b" Dec 02 20:29:16 crc kubenswrapper[4796]: I1202 20:29:16.525567 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/fc96d06c3e39ea04b86eb6900b16243d90f8687a90757dd22bb5b9493cswlln" Dec 02 20:29:20 crc kubenswrapper[4796]: I1202 20:29:20.678914 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5bd4b77f66-nk2fk"] Dec 02 20:29:20 crc kubenswrapper[4796]: E1202 20:29:20.679607 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f1197d5-f1e2-422c-93ba-2150f58ea971" containerName="extract" Dec 02 20:29:20 crc kubenswrapper[4796]: I1202 20:29:20.679621 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f1197d5-f1e2-422c-93ba-2150f58ea971" containerName="extract" Dec 02 20:29:20 crc kubenswrapper[4796]: E1202 20:29:20.679650 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f1197d5-f1e2-422c-93ba-2150f58ea971" containerName="pull" Dec 02 20:29:20 crc kubenswrapper[4796]: I1202 20:29:20.679658 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f1197d5-f1e2-422c-93ba-2150f58ea971" containerName="pull" Dec 02 20:29:20 crc kubenswrapper[4796]: E1202 20:29:20.679682 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f1197d5-f1e2-422c-93ba-2150f58ea971" containerName="util" Dec 02 20:29:20 crc kubenswrapper[4796]: I1202 20:29:20.679689 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f1197d5-f1e2-422c-93ba-2150f58ea971" containerName="util" Dec 02 20:29:20 crc kubenswrapper[4796]: I1202 20:29:20.679865 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f1197d5-f1e2-422c-93ba-2150f58ea971" containerName="extract" Dec 02 20:29:20 crc kubenswrapper[4796]: I1202 20:29:20.680492 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5bd4b77f66-nk2fk" Dec 02 20:29:20 crc kubenswrapper[4796]: I1202 20:29:20.683026 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-7b4x2" Dec 02 20:29:20 crc kubenswrapper[4796]: I1202 20:29:20.683374 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-service-cert" Dec 02 20:29:20 crc kubenswrapper[4796]: I1202 20:29:20.698769 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5bd4b77f66-nk2fk"] Dec 02 20:29:20 crc kubenswrapper[4796]: I1202 20:29:20.719998 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q95h\" (UniqueName: \"kubernetes.io/projected/b9622415-11a5-4c21-adf7-521caf43caa1-kube-api-access-5q95h\") pod \"watcher-operator-controller-manager-5bd4b77f66-nk2fk\" (UID: \"b9622415-11a5-4c21-adf7-521caf43caa1\") " pod="openstack-operators/watcher-operator-controller-manager-5bd4b77f66-nk2fk" Dec 02 20:29:20 crc kubenswrapper[4796]: I1202 20:29:20.720077 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b9622415-11a5-4c21-adf7-521caf43caa1-webhook-cert\") pod \"watcher-operator-controller-manager-5bd4b77f66-nk2fk\" (UID: \"b9622415-11a5-4c21-adf7-521caf43caa1\") " pod="openstack-operators/watcher-operator-controller-manager-5bd4b77f66-nk2fk" Dec 02 20:29:20 crc kubenswrapper[4796]: I1202 20:29:20.720105 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b9622415-11a5-4c21-adf7-521caf43caa1-apiservice-cert\") pod \"watcher-operator-controller-manager-5bd4b77f66-nk2fk\" (UID: \"b9622415-11a5-4c21-adf7-521caf43caa1\") " pod="openstack-operators/watcher-operator-controller-manager-5bd4b77f66-nk2fk" Dec 02 20:29:20 crc kubenswrapper[4796]: I1202 20:29:20.821153 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b9622415-11a5-4c21-adf7-521caf43caa1-webhook-cert\") pod \"watcher-operator-controller-manager-5bd4b77f66-nk2fk\" (UID: \"b9622415-11a5-4c21-adf7-521caf43caa1\") " pod="openstack-operators/watcher-operator-controller-manager-5bd4b77f66-nk2fk" Dec 02 20:29:20 crc kubenswrapper[4796]: I1202 20:29:20.821226 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b9622415-11a5-4c21-adf7-521caf43caa1-apiservice-cert\") pod \"watcher-operator-controller-manager-5bd4b77f66-nk2fk\" (UID: \"b9622415-11a5-4c21-adf7-521caf43caa1\") " pod="openstack-operators/watcher-operator-controller-manager-5bd4b77f66-nk2fk" Dec 02 20:29:20 crc kubenswrapper[4796]: I1202 20:29:20.821378 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q95h\" (UniqueName: \"kubernetes.io/projected/b9622415-11a5-4c21-adf7-521caf43caa1-kube-api-access-5q95h\") pod \"watcher-operator-controller-manager-5bd4b77f66-nk2fk\" (UID: \"b9622415-11a5-4c21-adf7-521caf43caa1\") " pod="openstack-operators/watcher-operator-controller-manager-5bd4b77f66-nk2fk" Dec 02 20:29:20 crc kubenswrapper[4796]: I1202 20:29:20.833208 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b9622415-11a5-4c21-adf7-521caf43caa1-webhook-cert\") pod \"watcher-operator-controller-manager-5bd4b77f66-nk2fk\" (UID: \"b9622415-11a5-4c21-adf7-521caf43caa1\") " pod="openstack-operators/watcher-operator-controller-manager-5bd4b77f66-nk2fk" Dec 02 20:29:20 crc kubenswrapper[4796]: I1202 20:29:20.833288 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b9622415-11a5-4c21-adf7-521caf43caa1-apiservice-cert\") pod \"watcher-operator-controller-manager-5bd4b77f66-nk2fk\" (UID: \"b9622415-11a5-4c21-adf7-521caf43caa1\") " pod="openstack-operators/watcher-operator-controller-manager-5bd4b77f66-nk2fk" Dec 02 20:29:20 crc kubenswrapper[4796]: I1202 20:29:20.841859 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q95h\" (UniqueName: \"kubernetes.io/projected/b9622415-11a5-4c21-adf7-521caf43caa1-kube-api-access-5q95h\") pod \"watcher-operator-controller-manager-5bd4b77f66-nk2fk\" (UID: \"b9622415-11a5-4c21-adf7-521caf43caa1\") " pod="openstack-operators/watcher-operator-controller-manager-5bd4b77f66-nk2fk" Dec 02 20:29:20 crc kubenswrapper[4796]: I1202 20:29:20.997937 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5bd4b77f66-nk2fk" Dec 02 20:29:21 crc kubenswrapper[4796]: W1202 20:29:21.487910 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9622415_11a5_4c21_adf7_521caf43caa1.slice/crio-3c46b8ead003faeb747421f63b093d137605cd9170a47c122ea6407fe4909917 WatchSource:0}: Error finding container 3c46b8ead003faeb747421f63b093d137605cd9170a47c122ea6407fe4909917: Status 404 returned error can't find the container with id 3c46b8ead003faeb747421f63b093d137605cd9170a47c122ea6407fe4909917 Dec 02 20:29:21 crc kubenswrapper[4796]: I1202 20:29:21.488671 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5bd4b77f66-nk2fk"] Dec 02 20:29:21 crc kubenswrapper[4796]: I1202 20:29:21.570038 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5bd4b77f66-nk2fk" event={"ID":"b9622415-11a5-4c21-adf7-521caf43caa1","Type":"ContainerStarted","Data":"3c46b8ead003faeb747421f63b093d137605cd9170a47c122ea6407fe4909917"} Dec 02 20:29:22 crc kubenswrapper[4796]: I1202 20:29:22.579152 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5bd4b77f66-nk2fk" event={"ID":"b9622415-11a5-4c21-adf7-521caf43caa1","Type":"ContainerStarted","Data":"9fcc9b98e2f166236fba6ba24cf17760f4833125323b951950e9a0dfc3292f93"} Dec 02 20:29:22 crc kubenswrapper[4796]: I1202 20:29:22.579430 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5bd4b77f66-nk2fk" Dec 02 20:29:22 crc kubenswrapper[4796]: I1202 20:29:22.613085 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5bd4b77f66-nk2fk" podStartSLOduration=2.613064413 podStartE2EDuration="2.613064413s" podCreationTimestamp="2025-12-02 20:29:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-12-02 20:29:22.606187456 +0000 UTC m=+1045.609562990" watchObservedRunningTime="2025-12-02 20:29:22.613064413 +0000 UTC m=+1045.616439947" Dec 02 20:29:29 crc kubenswrapper[4796]: I1202 20:29:29.073735 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-8tgcn" podUID="a5e8b895-e788-44f4-8481-520f1cbd75c0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.95:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 20:29:29 crc kubenswrapper[4796]: I1202 20:29:29.073765 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-8tgcn" podUID="a5e8b895-e788-44f4-8481-520f1cbd75c0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.95:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 20:29:31 crc kubenswrapper[4796]: I1202 20:29:31.008455 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5bd4b77f66-nk2fk" Dec 02 20:29:32 crc kubenswrapper[4796]: I1202 20:29:32.200887 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-655f76fc94-wn6n4"] Dec 02 20:29:32 crc kubenswrapper[4796]: I1202 20:29:32.202526 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-655f76fc94-wn6n4" Dec 02 20:29:32 crc kubenswrapper[4796]: I1202 20:29:32.222878 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-655f76fc94-wn6n4"] Dec 02 20:29:32 crc kubenswrapper[4796]: I1202 20:29:32.374722 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/89390c72-5591-46ee-8b6d-71268195b622-apiservice-cert\") pod \"watcher-operator-controller-manager-655f76fc94-wn6n4\" (UID: \"89390c72-5591-46ee-8b6d-71268195b622\") " pod="openstack-operators/watcher-operator-controller-manager-655f76fc94-wn6n4" Dec 02 20:29:32 crc kubenswrapper[4796]: I1202 20:29:32.375001 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/89390c72-5591-46ee-8b6d-71268195b622-webhook-cert\") pod \"watcher-operator-controller-manager-655f76fc94-wn6n4\" (UID: \"89390c72-5591-46ee-8b6d-71268195b622\") " pod="openstack-operators/watcher-operator-controller-manager-655f76fc94-wn6n4" Dec 02 20:29:32 crc kubenswrapper[4796]: I1202 20:29:32.375040 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp7hc\" (UniqueName: \"kubernetes.io/projected/89390c72-5591-46ee-8b6d-71268195b622-kube-api-access-qp7hc\") pod \"watcher-operator-controller-manager-655f76fc94-wn6n4\" (UID: \"89390c72-5591-46ee-8b6d-71268195b622\") " pod="openstack-operators/watcher-operator-controller-manager-655f76fc94-wn6n4" Dec 02 20:29:32 crc kubenswrapper[4796]: I1202 20:29:32.476814 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/89390c72-5591-46ee-8b6d-71268195b622-apiservice-cert\") pod \"watcher-operator-controller-manager-655f76fc94-wn6n4\" (UID: 
\"89390c72-5591-46ee-8b6d-71268195b622\") " pod="openstack-operators/watcher-operator-controller-manager-655f76fc94-wn6n4" Dec 02 20:29:32 crc kubenswrapper[4796]: I1202 20:29:32.476893 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/89390c72-5591-46ee-8b6d-71268195b622-webhook-cert\") pod \"watcher-operator-controller-manager-655f76fc94-wn6n4\" (UID: \"89390c72-5591-46ee-8b6d-71268195b622\") " pod="openstack-operators/watcher-operator-controller-manager-655f76fc94-wn6n4" Dec 02 20:29:32 crc kubenswrapper[4796]: I1202 20:29:32.476931 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp7hc\" (UniqueName: \"kubernetes.io/projected/89390c72-5591-46ee-8b6d-71268195b622-kube-api-access-qp7hc\") pod \"watcher-operator-controller-manager-655f76fc94-wn6n4\" (UID: \"89390c72-5591-46ee-8b6d-71268195b622\") " pod="openstack-operators/watcher-operator-controller-manager-655f76fc94-wn6n4" Dec 02 20:29:32 crc kubenswrapper[4796]: I1202 20:29:32.484148 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/89390c72-5591-46ee-8b6d-71268195b622-apiservice-cert\") pod \"watcher-operator-controller-manager-655f76fc94-wn6n4\" (UID: \"89390c72-5591-46ee-8b6d-71268195b622\") " pod="openstack-operators/watcher-operator-controller-manager-655f76fc94-wn6n4" Dec 02 20:29:32 crc kubenswrapper[4796]: I1202 20:29:32.487177 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/89390c72-5591-46ee-8b6d-71268195b622-webhook-cert\") pod \"watcher-operator-controller-manager-655f76fc94-wn6n4\" (UID: \"89390c72-5591-46ee-8b6d-71268195b622\") " pod="openstack-operators/watcher-operator-controller-manager-655f76fc94-wn6n4" Dec 02 20:29:32 crc kubenswrapper[4796]: I1202 20:29:32.504339 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp7hc\" (UniqueName: \"kubernetes.io/projected/89390c72-5591-46ee-8b6d-71268195b622-kube-api-access-qp7hc\") pod \"watcher-operator-controller-manager-655f76fc94-wn6n4\" (UID: \"89390c72-5591-46ee-8b6d-71268195b622\") " pod="openstack-operators/watcher-operator-controller-manager-655f76fc94-wn6n4" Dec 02 20:29:32 crc kubenswrapper[4796]: I1202 20:29:32.583245 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-655f76fc94-wn6n4" Dec 02 20:29:33 crc kubenswrapper[4796]: I1202 20:29:33.087454 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-655f76fc94-wn6n4"] Dec 02 20:29:33 crc kubenswrapper[4796]: I1202 20:29:33.682424 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-655f76fc94-wn6n4" event={"ID":"89390c72-5591-46ee-8b6d-71268195b622","Type":"ContainerStarted","Data":"2cc15fea4cb4788d5ba4d3ba54d0d83b66dab443d8cd44f3ee87e49fd79e4ca2"} Dec 02 20:29:33 crc kubenswrapper[4796]: I1202 20:29:33.682763 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-655f76fc94-wn6n4" event={"ID":"89390c72-5591-46ee-8b6d-71268195b622","Type":"ContainerStarted","Data":"762cc7c34001cd8eb63097d818756821a65c0ef5c22e45896f542a1c58b88f06"} Dec 02 20:29:33 crc kubenswrapper[4796]: I1202 20:29:33.682808 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-655f76fc94-wn6n4" Dec 02 20:29:33 crc kubenswrapper[4796]: I1202 20:29:33.718711 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-655f76fc94-wn6n4" podStartSLOduration=1.718680441 podStartE2EDuration="1.718680441s" podCreationTimestamp="2025-12-02 20:29:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:29:33.715929274 +0000 UTC m=+1056.719304808" watchObservedRunningTime="2025-12-02 20:29:33.718680441 +0000 UTC m=+1056.722055975" Dec 02 20:29:42 crc kubenswrapper[4796]: I1202 20:29:42.587890 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-655f76fc94-wn6n4" Dec 02 20:29:42 crc kubenswrapper[4796]: I1202 20:29:42.682442 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5bd4b77f66-nk2fk"] Dec 02 20:29:42 crc kubenswrapper[4796]: I1202 20:29:42.682673 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/watcher-operator-controller-manager-5bd4b77f66-nk2fk" podUID="b9622415-11a5-4c21-adf7-521caf43caa1" containerName="manager" containerID="cri-o://9fcc9b98e2f166236fba6ba24cf17760f4833125323b951950e9a0dfc3292f93" gracePeriod=10 Dec 02 20:29:43 crc kubenswrapper[4796]: I1202 20:29:43.769067 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5bd4b77f66-nk2fk" Dec 02 20:29:43 crc kubenswrapper[4796]: I1202 20:29:43.802191 4796 generic.go:334] "Generic (PLEG): container finished" podID="b9622415-11a5-4c21-adf7-521caf43caa1" containerID="9fcc9b98e2f166236fba6ba24cf17760f4833125323b951950e9a0dfc3292f93" exitCode=0 Dec 02 20:29:43 crc kubenswrapper[4796]: I1202 20:29:43.802278 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5bd4b77f66-nk2fk" event={"ID":"b9622415-11a5-4c21-adf7-521caf43caa1","Type":"ContainerDied","Data":"9fcc9b98e2f166236fba6ba24cf17760f4833125323b951950e9a0dfc3292f93"} Dec 02 20:29:43 crc kubenswrapper[4796]: I1202 20:29:43.803473 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5bd4b77f66-nk2fk" event={"ID":"b9622415-11a5-4c21-adf7-521caf43caa1","Type":"ContainerDied","Data":"3c46b8ead003faeb747421f63b093d137605cd9170a47c122ea6407fe4909917"} Dec 02 20:29:43 crc kubenswrapper[4796]: I1202 20:29:43.803513 4796 scope.go:117] "RemoveContainer" containerID="9fcc9b98e2f166236fba6ba24cf17760f4833125323b951950e9a0dfc3292f93" Dec 02 20:29:43 crc kubenswrapper[4796]: I1202 20:29:43.802382 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5bd4b77f66-nk2fk" Dec 02 20:29:43 crc kubenswrapper[4796]: I1202 20:29:43.830729 4796 scope.go:117] "RemoveContainer" containerID="9fcc9b98e2f166236fba6ba24cf17760f4833125323b951950e9a0dfc3292f93" Dec 02 20:29:43 crc kubenswrapper[4796]: E1202 20:29:43.831379 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fcc9b98e2f166236fba6ba24cf17760f4833125323b951950e9a0dfc3292f93\": container with ID starting with 9fcc9b98e2f166236fba6ba24cf17760f4833125323b951950e9a0dfc3292f93 not found: ID does not exist" containerID="9fcc9b98e2f166236fba6ba24cf17760f4833125323b951950e9a0dfc3292f93" Dec 02 20:29:43 crc kubenswrapper[4796]: I1202 20:29:43.831437 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fcc9b98e2f166236fba6ba24cf17760f4833125323b951950e9a0dfc3292f93"} err="failed to get container status \"9fcc9b98e2f166236fba6ba24cf17760f4833125323b951950e9a0dfc3292f93\": rpc error: code = NotFound desc = could not find container \"9fcc9b98e2f166236fba6ba24cf17760f4833125323b951950e9a0dfc3292f93\": container with ID starting with 9fcc9b98e2f166236fba6ba24cf17760f4833125323b951950e9a0dfc3292f93 not found: ID does not exist" Dec 02 20:29:43 crc kubenswrapper[4796]: I1202 20:29:43.861985 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b9622415-11a5-4c21-adf7-521caf43caa1-webhook-cert\") pod \"b9622415-11a5-4c21-adf7-521caf43caa1\" (UID: \"b9622415-11a5-4c21-adf7-521caf43caa1\") " Dec 02 20:29:43 crc kubenswrapper[4796]: I1202 20:29:43.862131 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5q95h\" (UniqueName: \"kubernetes.io/projected/b9622415-11a5-4c21-adf7-521caf43caa1-kube-api-access-5q95h\") pod \"b9622415-11a5-4c21-adf7-521caf43caa1\" (UID: \"b9622415-11a5-4c21-adf7-521caf43caa1\") " Dec 02 20:29:43 crc kubenswrapper[4796]: I1202 20:29:43.862178 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b9622415-11a5-4c21-adf7-521caf43caa1-apiservice-cert\") pod \"b9622415-11a5-4c21-adf7-521caf43caa1\" (UID: \"b9622415-11a5-4c21-adf7-521caf43caa1\") " Dec 02 20:29:43 crc kubenswrapper[4796]: I1202 20:29:43.870301 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9622415-11a5-4c21-adf7-521caf43caa1-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "b9622415-11a5-4c21-adf7-521caf43caa1" (UID: "b9622415-11a5-4c21-adf7-521caf43caa1"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:29:43 crc kubenswrapper[4796]: I1202 20:29:43.871548 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9622415-11a5-4c21-adf7-521caf43caa1-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "b9622415-11a5-4c21-adf7-521caf43caa1" (UID: "b9622415-11a5-4c21-adf7-521caf43caa1"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:29:43 crc kubenswrapper[4796]: I1202 20:29:43.871605 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9622415-11a5-4c21-adf7-521caf43caa1-kube-api-access-5q95h" (OuterVolumeSpecName: "kube-api-access-5q95h") pod "b9622415-11a5-4c21-adf7-521caf43caa1" (UID: "b9622415-11a5-4c21-adf7-521caf43caa1"). InnerVolumeSpecName "kube-api-access-5q95h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:29:43 crc kubenswrapper[4796]: I1202 20:29:43.964794 4796 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b9622415-11a5-4c21-adf7-521caf43caa1-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 02 20:29:43 crc kubenswrapper[4796]: I1202 20:29:43.964867 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5q95h\" (UniqueName: \"kubernetes.io/projected/b9622415-11a5-4c21-adf7-521caf43caa1-kube-api-access-5q95h\") on node \"crc\" DevicePath \"\"" Dec 02 20:29:43 crc kubenswrapper[4796]: I1202 20:29:43.964903 4796 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b9622415-11a5-4c21-adf7-521caf43caa1-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 02 20:29:44 crc kubenswrapper[4796]: I1202 20:29:44.157421 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5bd4b77f66-nk2fk"] Dec 02 20:29:44 crc kubenswrapper[4796]: E1202 20:29:44.162150 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9622415_11a5_4c21_adf7_521caf43caa1.slice\": RecentStats: unable to find data in memory cache]" Dec 02 20:29:44 crc kubenswrapper[4796]: I1202 20:29:44.165469 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5bd4b77f66-nk2fk"] Dec 02 20:29:45 crc kubenswrapper[4796]: I1202 20:29:45.274867 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9622415-11a5-4c21-adf7-521caf43caa1" path="/var/lib/kubelet/pods/b9622415-11a5-4c21-adf7-521caf43caa1/volumes" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.652053 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/rabbitmq-server-0"] Dec 02 20:29:55 crc kubenswrapper[4796]: E1202 
20:29:55.653000 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9622415-11a5-4c21-adf7-521caf43caa1" containerName="manager" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.653018 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9622415-11a5-4c21-adf7-521caf43caa1" containerName="manager" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.653221 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9622415-11a5-4c21-adf7-521caf43caa1" containerName="manager" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.654132 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.656358 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-server-dockercfg-rtbv6" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.656359 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"kube-root-ca.crt" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.657215 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-rabbitmq-svc" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.657354 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-plugins-conf" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.657359 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-config-data" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.658000 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"openshift-service-ca.crt" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.658163 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-default-user" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.658343 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-erlang-cookie" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.658647 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-server-conf" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.673028 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/rabbitmq-server-0"] Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.776810 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2-config-data\") pod \"rabbitmq-server-0\" (UID: \"17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.776851 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.776873 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2532c9ed-e144-4ec2-a14e-9168bc5db472\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2532c9ed-e144-4ec2-a14e-9168bc5db472\") pod \"rabbitmq-server-0\" (UID: \"17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.776898 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.776924 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.776984 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.777003 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.777282 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.777380 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.777404 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.777697 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thzfm\" (UniqueName: \"kubernetes.io/projected/17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2-kube-api-access-thzfm\") pod \"rabbitmq-server-0\" (UID: \"17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.879391 4796 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2-config-data\") pod \"rabbitmq-server-0\" (UID: \"17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.879434 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.879460 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2532c9ed-e144-4ec2-a14e-9168bc5db472\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2532c9ed-e144-4ec2-a14e-9168bc5db472\") pod \"rabbitmq-server-0\" (UID: \"17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.879485 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.879509 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.879536 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.879558 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.879601 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.879626 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.879648 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.879732 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thzfm\" (UniqueName: \"kubernetes.io/projected/17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2-kube-api-access-thzfm\") pod \"rabbitmq-server-0\" (UID: \"17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.880156 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.880527 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2-config-data\") pod \"rabbitmq-server-0\" (UID: \"17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.880940 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.881097 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.882178 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.884513 4796 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.884951 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2532c9ed-e144-4ec2-a14e-9168bc5db472\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2532c9ed-e144-4ec2-a14e-9168bc5db472\") pod \"rabbitmq-server-0\" (UID: \"17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ab0f85ce893b7e2a52ecf4e5bdb16b10198f9ef6debee4c06bad21870ed62883/globalmount\"" pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.887358 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.887548 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.898022 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.910551 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.914224 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thzfm\" (UniqueName: \"kubernetes.io/projected/17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2-kube-api-access-thzfm\") pod \"rabbitmq-server-0\" (UID: \"17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.947528 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/rabbitmq-notifications-server-0"] Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.948027 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2532c9ed-e144-4ec2-a14e-9168bc5db472\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2532c9ed-e144-4ec2-a14e-9168bc5db472\") pod \"rabbitmq-server-0\" (UID: \"17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.948792 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.952531 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-notifications-config-data" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.952578 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-rabbitmq-notifications-svc" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.952796 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-notifications-server-dockercfg-sjztx" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.952939 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-notifications-plugins-conf" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.953036 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-notifications-default-user" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.953161 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-notifications-server-conf" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.953245 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-notifications-erlang-cookie" Dec 02 20:29:55 crc kubenswrapper[4796]: I1202 20:29:55.979777 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/rabbitmq-notifications-server-0"] Dec 02 20:29:56 crc kubenswrapper[4796]: I1202 20:29:56.014267 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 20:29:56 crc kubenswrapper[4796]: I1202 20:29:56.084013 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0add539b-c51e-4616-9235-12465a2e5ecb-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"0add539b-c51e-4616-9235-12465a2e5ecb\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 20:29:56 crc kubenswrapper[4796]: I1202 20:29:56.084227 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0add539b-c51e-4616-9235-12465a2e5ecb-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"0add539b-c51e-4616-9235-12465a2e5ecb\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 20:29:56 crc kubenswrapper[4796]: I1202 20:29:56.084345 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0add539b-c51e-4616-9235-12465a2e5ecb-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"0add539b-c51e-4616-9235-12465a2e5ecb\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 20:29:56 crc kubenswrapper[4796]: I1202 20:29:56.084422 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0add539b-c51e-4616-9235-12465a2e5ecb-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"0add539b-c51e-4616-9235-12465a2e5ecb\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 20:29:56 crc kubenswrapper[4796]: 
I1202 20:29:56.084462 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0add539b-c51e-4616-9235-12465a2e5ecb-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"0add539b-c51e-4616-9235-12465a2e5ecb\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 20:29:56 crc kubenswrapper[4796]: I1202 20:29:56.084484 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0add539b-c51e-4616-9235-12465a2e5ecb-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"0add539b-c51e-4616-9235-12465a2e5ecb\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 20:29:56 crc kubenswrapper[4796]: I1202 20:29:56.084512 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0add539b-c51e-4616-9235-12465a2e5ecb-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"0add539b-c51e-4616-9235-12465a2e5ecb\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 20:29:56 crc kubenswrapper[4796]: I1202 20:29:56.084551 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0add539b-c51e-4616-9235-12465a2e5ecb-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"0add539b-c51e-4616-9235-12465a2e5ecb\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 20:29:56 crc kubenswrapper[4796]: I1202 20:29:56.084594 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0add539b-c51e-4616-9235-12465a2e5ecb-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"0add539b-c51e-4616-9235-12465a2e5ecb\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 20:29:56 crc kubenswrapper[4796]: I1202 20:29:56.084620 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c240e825-43e8-4e16-97eb-2518111f53e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c240e825-43e8-4e16-97eb-2518111f53e1\") pod \"rabbitmq-notifications-server-0\" (UID: \"0add539b-c51e-4616-9235-12465a2e5ecb\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 20:29:56 crc kubenswrapper[4796]: I1202 20:29:56.084655 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf78v\" (UniqueName: \"kubernetes.io/projected/0add539b-c51e-4616-9235-12465a2e5ecb-kube-api-access-mf78v\") pod \"rabbitmq-notifications-server-0\" (UID: \"0add539b-c51e-4616-9235-12465a2e5ecb\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 20:29:56 crc kubenswrapper[4796]: I1202 20:29:56.187489 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0add539b-c51e-4616-9235-12465a2e5ecb-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"0add539b-c51e-4616-9235-12465a2e5ecb\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 20:29:56 crc kubenswrapper[4796]: I1202 20:29:56.187830 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0add539b-c51e-4616-9235-12465a2e5ecb-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"0add539b-c51e-4616-9235-12465a2e5ecb\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 20:29:56 crc kubenswrapper[4796]: I1202 20:29:56.187858 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0add539b-c51e-4616-9235-12465a2e5ecb-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"0add539b-c51e-4616-9235-12465a2e5ecb\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 20:29:56 crc kubenswrapper[4796]: I1202 20:29:56.187875 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0add539b-c51e-4616-9235-12465a2e5ecb-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"0add539b-c51e-4616-9235-12465a2e5ecb\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 20:29:56 crc kubenswrapper[4796]: I1202 20:29:56.187904 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0add539b-c51e-4616-9235-12465a2e5ecb-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"0add539b-c51e-4616-9235-12465a2e5ecb\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 20:29:56 crc kubenswrapper[4796]: I1202 20:29:56.187933 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0add539b-c51e-4616-9235-12465a2e5ecb-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"0add539b-c51e-4616-9235-12465a2e5ecb\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 20:29:56 crc kubenswrapper[4796]: I1202 20:29:56.187963 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c240e825-43e8-4e16-97eb-2518111f53e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c240e825-43e8-4e16-97eb-2518111f53e1\") pod \"rabbitmq-notifications-server-0\" (UID: \"0add539b-c51e-4616-9235-12465a2e5ecb\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 20:29:56 crc kubenswrapper[4796]: I1202 20:29:56.187983 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0add539b-c51e-4616-9235-12465a2e5ecb-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"0add539b-c51e-4616-9235-12465a2e5ecb\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 20:29:56 crc kubenswrapper[4796]: I1202 20:29:56.188019 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mf78v\" (UniqueName: \"kubernetes.io/projected/0add539b-c51e-4616-9235-12465a2e5ecb-kube-api-access-mf78v\") pod \"rabbitmq-notifications-server-0\" (UID: \"0add539b-c51e-4616-9235-12465a2e5ecb\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 20:29:56 crc kubenswrapper[4796]: I1202 20:29:56.188058 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0add539b-c51e-4616-9235-12465a2e5ecb-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"0add539b-c51e-4616-9235-12465a2e5ecb\") " 
pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 20:29:56 crc kubenswrapper[4796]: I1202 20:29:56.188099 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0add539b-c51e-4616-9235-12465a2e5ecb-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"0add539b-c51e-4616-9235-12465a2e5ecb\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 20:29:56 crc kubenswrapper[4796]: I1202 20:29:56.188852 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0add539b-c51e-4616-9235-12465a2e5ecb-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"0add539b-c51e-4616-9235-12465a2e5ecb\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 20:29:56 crc kubenswrapper[4796]: I1202 20:29:56.189345 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0add539b-c51e-4616-9235-12465a2e5ecb-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"0add539b-c51e-4616-9235-12465a2e5ecb\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 20:29:56 crc kubenswrapper[4796]: I1202 20:29:56.189517 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0add539b-c51e-4616-9235-12465a2e5ecb-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"0add539b-c51e-4616-9235-12465a2e5ecb\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 20:29:56 crc kubenswrapper[4796]: I1202 20:29:56.189741 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0add539b-c51e-4616-9235-12465a2e5ecb-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"0add539b-c51e-4616-9235-12465a2e5ecb\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 20:29:56 crc kubenswrapper[4796]: I1202 20:29:56.189968 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0add539b-c51e-4616-9235-12465a2e5ecb-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"0add539b-c51e-4616-9235-12465a2e5ecb\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 20:29:56 crc kubenswrapper[4796]: I1202 20:29:56.191724 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0add539b-c51e-4616-9235-12465a2e5ecb-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"0add539b-c51e-4616-9235-12465a2e5ecb\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 20:29:56 crc kubenswrapper[4796]: I1202 20:29:56.195375 4796 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 02 20:29:56 crc kubenswrapper[4796]: I1202 20:29:56.195440 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c240e825-43e8-4e16-97eb-2518111f53e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c240e825-43e8-4e16-97eb-2518111f53e1\") pod \"rabbitmq-notifications-server-0\" (UID: \"0add539b-c51e-4616-9235-12465a2e5ecb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f9f70481ff70405303ea870b3a191ff71d0a76a549aa770c0fe636bfc1c38b4a/globalmount\"" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 20:29:56 crc kubenswrapper[4796]: I1202 20:29:56.191870 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0add539b-c51e-4616-9235-12465a2e5ecb-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"0add539b-c51e-4616-9235-12465a2e5ecb\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 20:29:56 crc kubenswrapper[4796]: I1202 20:29:56.200397 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0add539b-c51e-4616-9235-12465a2e5ecb-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"0add539b-c51e-4616-9235-12465a2e5ecb\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 20:29:56 crc kubenswrapper[4796]: I1202 20:29:56.207288 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0add539b-c51e-4616-9235-12465a2e5ecb-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"0add539b-c51e-4616-9235-12465a2e5ecb\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 20:29:56 crc kubenswrapper[4796]: I1202 20:29:56.211172 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf78v\" (UniqueName: \"kubernetes.io/projected/0add539b-c51e-4616-9235-12465a2e5ecb-kube-api-access-mf78v\") pod \"rabbitmq-notifications-server-0\" (UID: \"0add539b-c51e-4616-9235-12465a2e5ecb\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 20:29:56 crc kubenswrapper[4796]: I1202 20:29:56.235067 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c240e825-43e8-4e16-97eb-2518111f53e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c240e825-43e8-4e16-97eb-2518111f53e1\") pod \"rabbitmq-notifications-server-0\" (UID: \"0add539b-c51e-4616-9235-12465a2e5ecb\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 20:29:56 crc kubenswrapper[4796]: I1202 20:29:56.272679 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 20:29:56 crc kubenswrapper[4796]: I1202 20:29:56.512368 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/rabbitmq-server-0"] Dec 02 20:29:56 crc kubenswrapper[4796]: W1202 20:29:56.707653 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0add539b_c51e_4616_9235_12465a2e5ecb.slice/crio-fe4ccc9be1f9f0906bc121568ad2648881dfb09efe5bd6657b561a08bb1ea41e WatchSource:0}: Error finding container fe4ccc9be1f9f0906bc121568ad2648881dfb09efe5bd6657b561a08bb1ea41e: Status 404 returned error can't find the container with id fe4ccc9be1f9f0906bc121568ad2648881dfb09efe5bd6657b561a08bb1ea41e Dec 02 20:29:56 crc kubenswrapper[4796]: I1202 20:29:56.708162 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/rabbitmq-notifications-server-0"] Dec 02 20:29:56 crc kubenswrapper[4796]: I1202 20:29:56.935015 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-server-0" event={"ID":"17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2","Type":"ContainerStarted","Data":"3a6dd12416d7963b7c4914412d75384308def609e6c0f8ae734f69fc541fcfd4"} Dec 02 20:29:56 crc kubenswrapper[4796]: I1202 20:29:56.936973 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"0add539b-c51e-4616-9235-12465a2e5ecb","Type":"ContainerStarted","Data":"fe4ccc9be1f9f0906bc121568ad2648881dfb09efe5bd6657b561a08bb1ea41e"} Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.130302 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/openstack-galera-0"] Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.131746 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/openstack-galera-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.138847 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"galera-openstack-dockercfg-4985r" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.139217 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"openstack-config-data" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.139442 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-galera-openstack-svc" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.140907 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"openstack-scripts" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.141205 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"combined-ca-bundle" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.142798 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstack-galera-0"] Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.208146 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c8100d7-1ae6-4220-88d4-527f681270b3-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"7c8100d7-1ae6-4220-88d4-527f681270b3\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.208286 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7c8100d7-1ae6-4220-88d4-527f681270b3-kolla-config\") pod \"openstack-galera-0\" (UID: \"7c8100d7-1ae6-4220-88d4-527f681270b3\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.208370 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c8100d7-1ae6-4220-88d4-527f681270b3-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"7c8100d7-1ae6-4220-88d4-527f681270b3\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.208401 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7c8100d7-1ae6-4220-88d4-527f681270b3-config-data-default\") pod \"openstack-galera-0\" (UID: \"7c8100d7-1ae6-4220-88d4-527f681270b3\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.208496 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c8100d7-1ae6-4220-88d4-527f681270b3-operator-scripts\") pod \"openstack-galera-0\" (UID: \"7c8100d7-1ae6-4220-88d4-527f681270b3\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.208526 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbgmx\" (UniqueName: \"kubernetes.io/projected/7c8100d7-1ae6-4220-88d4-527f681270b3-kube-api-access-bbgmx\") pod \"openstack-galera-0\" (UID: \"7c8100d7-1ae6-4220-88d4-527f681270b3\") " 
pod="watcher-kuttl-default/openstack-galera-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.208617 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7c8100d7-1ae6-4220-88d4-527f681270b3-config-data-generated\") pod \"openstack-galera-0\" (UID: \"7c8100d7-1ae6-4220-88d4-527f681270b3\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.208642 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d2a98bb9-a58a-457f-8011-20b9665342e6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d2a98bb9-a58a-457f-8011-20b9665342e6\") pod \"openstack-galera-0\" (UID: \"7c8100d7-1ae6-4220-88d4-527f681270b3\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.309675 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c8100d7-1ae6-4220-88d4-527f681270b3-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"7c8100d7-1ae6-4220-88d4-527f681270b3\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.309748 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7c8100d7-1ae6-4220-88d4-527f681270b3-kolla-config\") pod \"openstack-galera-0\" (UID: \"7c8100d7-1ae6-4220-88d4-527f681270b3\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.309778 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c8100d7-1ae6-4220-88d4-527f681270b3-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"7c8100d7-1ae6-4220-88d4-527f681270b3\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.309806 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7c8100d7-1ae6-4220-88d4-527f681270b3-config-data-default\") pod \"openstack-galera-0\" (UID: \"7c8100d7-1ae6-4220-88d4-527f681270b3\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.309851 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c8100d7-1ae6-4220-88d4-527f681270b3-operator-scripts\") pod \"openstack-galera-0\" (UID: \"7c8100d7-1ae6-4220-88d4-527f681270b3\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.309877 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbgmx\" (UniqueName: \"kubernetes.io/projected/7c8100d7-1ae6-4220-88d4-527f681270b3-kube-api-access-bbgmx\") pod \"openstack-galera-0\" (UID: \"7c8100d7-1ae6-4220-88d4-527f681270b3\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.309912 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7c8100d7-1ae6-4220-88d4-527f681270b3-config-data-generated\") pod \"openstack-galera-0\" (UID: 
\"7c8100d7-1ae6-4220-88d4-527f681270b3\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.309942 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d2a98bb9-a58a-457f-8011-20b9665342e6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d2a98bb9-a58a-457f-8011-20b9665342e6\") pod \"openstack-galera-0\" (UID: \"7c8100d7-1ae6-4220-88d4-527f681270b3\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.311704 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7c8100d7-1ae6-4220-88d4-527f681270b3-config-data-generated\") pod \"openstack-galera-0\" (UID: \"7c8100d7-1ae6-4220-88d4-527f681270b3\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.316441 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"openstack-config-data" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.316719 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-galera-openstack-svc" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.320623 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"combined-ca-bundle" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.325667 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"openstack-scripts" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.327077 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7c8100d7-1ae6-4220-88d4-527f681270b3-kolla-config\") pod \"openstack-galera-0\" (UID: \"7c8100d7-1ae6-4220-88d4-527f681270b3\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.327431 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7c8100d7-1ae6-4220-88d4-527f681270b3-config-data-default\") pod \"openstack-galera-0\" (UID: \"7c8100d7-1ae6-4220-88d4-527f681270b3\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.333029 4796 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.333078 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d2a98bb9-a58a-457f-8011-20b9665342e6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d2a98bb9-a58a-457f-8011-20b9665342e6\") pod \"openstack-galera-0\" (UID: \"7c8100d7-1ae6-4220-88d4-527f681270b3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/512edc163f75875fb10d6ccdc8808289f6c7da8f682b02f66f58111f9d645d4b/globalmount\"" pod="watcher-kuttl-default/openstack-galera-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.333076 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c8100d7-1ae6-4220-88d4-527f681270b3-operator-scripts\") pod \"openstack-galera-0\" (UID: \"7c8100d7-1ae6-4220-88d4-527f681270b3\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.341904 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbgmx\" (UniqueName: \"kubernetes.io/projected/7c8100d7-1ae6-4220-88d4-527f681270b3-kube-api-access-bbgmx\") pod \"openstack-galera-0\" (UID: \"7c8100d7-1ae6-4220-88d4-527f681270b3\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.342769 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c8100d7-1ae6-4220-88d4-527f681270b3-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"7c8100d7-1ae6-4220-88d4-527f681270b3\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.371569 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c8100d7-1ae6-4220-88d4-527f681270b3-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"7c8100d7-1ae6-4220-88d4-527f681270b3\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.393493 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d2a98bb9-a58a-457f-8011-20b9665342e6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d2a98bb9-a58a-457f-8011-20b9665342e6\") pod \"openstack-galera-0\" (UID: \"7c8100d7-1ae6-4220-88d4-527f681270b3\") " pod="watcher-kuttl-default/openstack-galera-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.401510 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/memcached-0"] Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.402623 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/memcached-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.406673 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"memcached-memcached-dockercfg-mkczv" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.410796 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/memcached-0"] Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.411667 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-memcached-svc" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.411855 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"memcached-config-data" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.466740 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"galera-openstack-dockercfg-4985r" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.474745 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/openstack-galera-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.517539 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ebca558f-ecfa-4b05-b9df-b59f884f0366-config-data\") pod \"memcached-0\" (UID: \"ebca558f-ecfa-4b05-b9df-b59f884f0366\") " pod="watcher-kuttl-default/memcached-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.518089 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hr4j\" (UniqueName: \"kubernetes.io/projected/ebca558f-ecfa-4b05-b9df-b59f884f0366-kube-api-access-6hr4j\") pod \"memcached-0\" (UID: \"ebca558f-ecfa-4b05-b9df-b59f884f0366\") " pod="watcher-kuttl-default/memcached-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.518134 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebca558f-ecfa-4b05-b9df-b59f884f0366-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ebca558f-ecfa-4b05-b9df-b59f884f0366\") " pod="watcher-kuttl-default/memcached-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.518175 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebca558f-ecfa-4b05-b9df-b59f884f0366-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ebca558f-ecfa-4b05-b9df-b59f884f0366\") " pod="watcher-kuttl-default/memcached-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.518215 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ebca558f-ecfa-4b05-b9df-b59f884f0366-kolla-config\") pod \"memcached-0\" (UID: \"ebca558f-ecfa-4b05-b9df-b59f884f0366\") " pod="watcher-kuttl-default/memcached-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.620458 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ebca558f-ecfa-4b05-b9df-b59f884f0366-kolla-config\") pod \"memcached-0\" (UID: \"ebca558f-ecfa-4b05-b9df-b59f884f0366\") " pod="watcher-kuttl-default/memcached-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.620535 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ebca558f-ecfa-4b05-b9df-b59f884f0366-config-data\") pod \"memcached-0\" (UID: \"ebca558f-ecfa-4b05-b9df-b59f884f0366\") " pod="watcher-kuttl-default/memcached-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.620570 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hr4j\" (UniqueName: \"kubernetes.io/projected/ebca558f-ecfa-4b05-b9df-b59f884f0366-kube-api-access-6hr4j\") pod \"memcached-0\" (UID: \"ebca558f-ecfa-4b05-b9df-b59f884f0366\") " pod="watcher-kuttl-default/memcached-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.620602 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebca558f-ecfa-4b05-b9df-b59f884f0366-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ebca558f-ecfa-4b05-b9df-b59f884f0366\") " pod="watcher-kuttl-default/memcached-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.621205 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebca558f-ecfa-4b05-b9df-b59f884f0366-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ebca558f-ecfa-4b05-b9df-b59f884f0366\") " pod="watcher-kuttl-default/memcached-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.625104 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ebca558f-ecfa-4b05-b9df-b59f884f0366-config-data\") pod \"memcached-0\" (UID: \"ebca558f-ecfa-4b05-b9df-b59f884f0366\") " pod="watcher-kuttl-default/memcached-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.625339 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ebca558f-ecfa-4b05-b9df-b59f884f0366-kolla-config\") pod \"memcached-0\" (UID: \"ebca558f-ecfa-4b05-b9df-b59f884f0366\") " pod="watcher-kuttl-default/memcached-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.627881 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebca558f-ecfa-4b05-b9df-b59f884f0366-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ebca558f-ecfa-4b05-b9df-b59f884f0366\") " pod="watcher-kuttl-default/memcached-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.628738 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebca558f-ecfa-4b05-b9df-b59f884f0366-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ebca558f-ecfa-4b05-b9df-b59f884f0366\") " pod="watcher-kuttl-default/memcached-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.663418 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hr4j\" (UniqueName: \"kubernetes.io/projected/ebca558f-ecfa-4b05-b9df-b59f884f0366-kube-api-access-6hr4j\") pod \"memcached-0\" (UID: \"ebca558f-ecfa-4b05-b9df-b59f884f0366\") " pod="watcher-kuttl-default/memcached-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.694433 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.695498 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.710305 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"telemetry-ceilometer-dockercfg-b2hsb" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.712827 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.745721 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/memcached-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.826063 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5krrx\" (UniqueName: \"kubernetes.io/projected/52ef12c4-98d6-4208-b52c-b32a152b87bc-kube-api-access-5krrx\") pod \"kube-state-metrics-0\" (UID: \"52ef12c4-98d6-4208-b52c-b32a152b87bc\") " pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 20:29:57 crc kubenswrapper[4796]: I1202 20:29:57.927331 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5krrx\" (UniqueName: \"kubernetes.io/projected/52ef12c4-98d6-4208-b52c-b32a152b87bc-kube-api-access-5krrx\") pod \"kube-state-metrics-0\" (UID: \"52ef12c4-98d6-4208-b52c-b32a152b87bc\") " pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 20:29:58 crc kubenswrapper[4796]: I1202 20:29:58.021358 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5krrx\" (UniqueName: \"kubernetes.io/projected/52ef12c4-98d6-4208-b52c-b32a152b87bc-kube-api-access-5krrx\") pod \"kube-state-metrics-0\" (UID: \"52ef12c4-98d6-4208-b52c-b32a152b87bc\") " pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 20:29:58 crc kubenswrapper[4796]: I1202 20:29:58.046294 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 20:29:58 crc kubenswrapper[4796]: I1202 20:29:58.134668 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstack-galera-0"] Dec 02 20:29:58 crc kubenswrapper[4796]: I1202 20:29:58.580916 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/alertmanager-metric-storage-0"] Dec 02 20:29:58 crc kubenswrapper[4796]: I1202 20:29:58.586509 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 20:29:58 crc kubenswrapper[4796]: I1202 20:29:58.591794 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"metric-storage-alertmanager-dockercfg-jv97k" Dec 02 20:29:58 crc kubenswrapper[4796]: I1202 20:29:58.591853 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"alertmanager-metric-storage-cluster-tls-config" Dec 02 20:29:58 crc kubenswrapper[4796]: I1202 20:29:58.591808 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"alertmanager-metric-storage-generated" Dec 02 20:29:58 crc kubenswrapper[4796]: I1202 20:29:58.592547 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"alertmanager-metric-storage-tls-assets-0" Dec 02 20:29:58 crc kubenswrapper[4796]: I1202 20:29:58.592665 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"alertmanager-metric-storage-web-config" Dec 02 20:29:58 crc kubenswrapper[4796]: I1202 20:29:58.617829 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/alertmanager-metric-storage-0"] Dec 02 20:29:58 crc kubenswrapper[4796]: I1202 20:29:58.665525 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxn22\" (UniqueName: \"kubernetes.io/projected/2b4c0794-6b59-4170-8508-0e37663c7094-kube-api-access-cxn22\") pod \"alertmanager-metric-storage-0\" (UID: \"2b4c0794-6b59-4170-8508-0e37663c7094\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 20:29:58 crc kubenswrapper[4796]: I1202 20:29:58.665609 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/2b4c0794-6b59-4170-8508-0e37663c7094-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"2b4c0794-6b59-4170-8508-0e37663c7094\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 20:29:58 crc kubenswrapper[4796]: I1202 20:29:58.665667 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/2b4c0794-6b59-4170-8508-0e37663c7094-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"2b4c0794-6b59-4170-8508-0e37663c7094\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 20:29:58 crc kubenswrapper[4796]: I1202 20:29:58.665731 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2b4c0794-6b59-4170-8508-0e37663c7094-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"2b4c0794-6b59-4170-8508-0e37663c7094\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 20:29:58 crc kubenswrapper[4796]: I1202 20:29:58.665890 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2b4c0794-6b59-4170-8508-0e37663c7094-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"2b4c0794-6b59-4170-8508-0e37663c7094\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 20:29:58 crc kubenswrapper[4796]: I1202 20:29:58.665966 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/2b4c0794-6b59-4170-8508-0e37663c7094-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"2b4c0794-6b59-4170-8508-0e37663c7094\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 20:29:58 crc kubenswrapper[4796]: I1202 20:29:58.666019 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/2b4c0794-6b59-4170-8508-0e37663c7094-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"2b4c0794-6b59-4170-8508-0e37663c7094\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 20:29:58 crc kubenswrapper[4796]: I1202 20:29:58.668001 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/memcached-0"] Dec 02 20:29:58 crc kubenswrapper[4796]: I1202 20:29:58.767083 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2b4c0794-6b59-4170-8508-0e37663c7094-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"2b4c0794-6b59-4170-8508-0e37663c7094\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 20:29:58 crc kubenswrapper[4796]: I1202 20:29:58.767133 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2b4c0794-6b59-4170-8508-0e37663c7094-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"2b4c0794-6b59-4170-8508-0e37663c7094\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 20:29:58 crc kubenswrapper[4796]: I1202 20:29:58.767157 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2b4c0794-6b59-4170-8508-0e37663c7094-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"2b4c0794-6b59-4170-8508-0e37663c7094\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 20:29:58 crc kubenswrapper[4796]: I1202 20:29:58.767180 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/2b4c0794-6b59-4170-8508-0e37663c7094-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"2b4c0794-6b59-4170-8508-0e37663c7094\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 20:29:58 crc kubenswrapper[4796]: I1202 20:29:58.767207 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxn22\" (UniqueName: \"kubernetes.io/projected/2b4c0794-6b59-4170-8508-0e37663c7094-kube-api-access-cxn22\") pod \"alertmanager-metric-storage-0\" (UID: \"2b4c0794-6b59-4170-8508-0e37663c7094\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 20:29:58 crc kubenswrapper[4796]: I1202 20:29:58.767277 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/2b4c0794-6b59-4170-8508-0e37663c7094-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"2b4c0794-6b59-4170-8508-0e37663c7094\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 20:29:58 crc kubenswrapper[4796]: I1202 20:29:58.767316 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/2b4c0794-6b59-4170-8508-0e37663c7094-config-volume\") pod 
\"alertmanager-metric-storage-0\" (UID: \"2b4c0794-6b59-4170-8508-0e37663c7094\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 20:29:58 crc kubenswrapper[4796]: I1202 20:29:58.768851 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/2b4c0794-6b59-4170-8508-0e37663c7094-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"2b4c0794-6b59-4170-8508-0e37663c7094\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 20:29:58 crc kubenswrapper[4796]: I1202 20:29:58.773891 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2b4c0794-6b59-4170-8508-0e37663c7094-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"2b4c0794-6b59-4170-8508-0e37663c7094\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 20:29:58 crc kubenswrapper[4796]: I1202 20:29:58.776294 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2b4c0794-6b59-4170-8508-0e37663c7094-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"2b4c0794-6b59-4170-8508-0e37663c7094\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 20:29:58 crc kubenswrapper[4796]: I1202 20:29:58.776407 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/2b4c0794-6b59-4170-8508-0e37663c7094-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"2b4c0794-6b59-4170-8508-0e37663c7094\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 20:29:58 crc kubenswrapper[4796]: I1202 20:29:58.780312 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2b4c0794-6b59-4170-8508-0e37663c7094-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"2b4c0794-6b59-4170-8508-0e37663c7094\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 20:29:58 crc kubenswrapper[4796]: I1202 20:29:58.782987 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/2b4c0794-6b59-4170-8508-0e37663c7094-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"2b4c0794-6b59-4170-8508-0e37663c7094\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 20:29:58 crc kubenswrapper[4796]: I1202 20:29:58.798040 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxn22\" (UniqueName: \"kubernetes.io/projected/2b4c0794-6b59-4170-8508-0e37663c7094-kube-api-access-cxn22\") pod \"alertmanager-metric-storage-0\" (UID: \"2b4c0794-6b59-4170-8508-0e37663c7094\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 20:29:58 crc kubenswrapper[4796]: I1202 20:29:58.915380 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 20:29:58 crc kubenswrapper[4796]: I1202 20:29:58.982699 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-94mrc"] Dec 02 20:29:58 crc kubenswrapper[4796]: I1202 20:29:58.989433 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-94mrc" Dec 02 20:29:58 crc kubenswrapper[4796]: I1202 20:29:58.992436 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Dec 02 20:29:58 crc kubenswrapper[4796]: I1202 20:29:58.992694 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-8mshc" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.009743 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-94mrc"] Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.060211 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"ebca558f-ecfa-4b05-b9df-b59f884f0366","Type":"ContainerStarted","Data":"ff4a8380c7245b581d230d863d413ba391bb99bd8b774b79e660b7088eaacb0e"} Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.076486 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66c828b2-afdf-4b46-9199-a7c0ceaf3942-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-94mrc\" (UID: \"66c828b2-afdf-4b46-9199-a7c0ceaf3942\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-94mrc" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.076977 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6pfk\" (UniqueName: \"kubernetes.io/projected/66c828b2-afdf-4b46-9199-a7c0ceaf3942-kube-api-access-r6pfk\") pod \"observability-ui-dashboards-7d5fb4cbfb-94mrc\" (UID: \"66c828b2-afdf-4b46-9199-a7c0ceaf3942\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-94mrc" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.093480 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstack-galera-0" event={"ID":"7c8100d7-1ae6-4220-88d4-527f681270b3","Type":"ContainerStarted","Data":"e843509507321c204c1bbdf5e28d6a2d424028c64a084333a8b9a0f68261c22f"} Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.119195 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.180332 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6pfk\" (UniqueName: \"kubernetes.io/projected/66c828b2-afdf-4b46-9199-a7c0ceaf3942-kube-api-access-r6pfk\") pod \"observability-ui-dashboards-7d5fb4cbfb-94mrc\" (UID: \"66c828b2-afdf-4b46-9199-a7c0ceaf3942\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-94mrc" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.180481 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66c828b2-afdf-4b46-9199-a7c0ceaf3942-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-94mrc\" (UID: \"66c828b2-afdf-4b46-9199-a7c0ceaf3942\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-94mrc" Dec 02 20:29:59 crc kubenswrapper[4796]: E1202 20:29:59.180656 4796 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Dec 02 20:29:59 crc kubenswrapper[4796]: E1202 20:29:59.180752 4796 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/66c828b2-afdf-4b46-9199-a7c0ceaf3942-serving-cert podName:66c828b2-afdf-4b46-9199-a7c0ceaf3942 nodeName:}" failed. No retries permitted until 2025-12-02 20:29:59.680724221 +0000 UTC m=+1082.684099755 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/66c828b2-afdf-4b46-9199-a7c0ceaf3942-serving-cert") pod "observability-ui-dashboards-7d5fb4cbfb-94mrc" (UID: "66c828b2-afdf-4b46-9199-a7c0ceaf3942") : secret "observability-ui-dashboards" not found Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.226193 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6pfk\" (UniqueName: \"kubernetes.io/projected/66c828b2-afdf-4b46-9199-a7c0ceaf3942-kube-api-access-r6pfk\") pod \"observability-ui-dashboards-7d5fb4cbfb-94mrc\" (UID: \"66c828b2-afdf-4b46-9199-a7c0ceaf3942\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-94mrc" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.238329 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.279806 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.285648 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-web-config" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.286459 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.298121 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.298297 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-0" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.298791 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"metric-storage-prometheus-dockercfg-sknx5" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.301002 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-tls-assets-0" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.367894 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.488972 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.489034 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1\") " 
pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.489069 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ac0fc788-d8ad-4ecd-a0f3-abf59e50af78\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ac0fc788-d8ad-4ecd-a0f3-abf59e50af78\") pod \"prometheus-metric-storage-0\" (UID: \"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.489093 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.489120 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1-config\") pod \"prometheus-metric-storage-0\" (UID: \"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.489140 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.489228 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2hhw\" (UniqueName: \"kubernetes.io/projected/adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1-kube-api-access-v2hhw\") pod \"prometheus-metric-storage-0\" (UID: \"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.489273 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.497799 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-68944d7b94-tn494"] Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.499751 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-68944d7b94-tn494" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.523195 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-68944d7b94-tn494"] Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.591215 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ac0fc788-d8ad-4ecd-a0f3-abf59e50af78\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ac0fc788-d8ad-4ecd-a0f3-abf59e50af78\") pod \"prometheus-metric-storage-0\" (UID: \"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.591358 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.591400 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1-config\") pod \"prometheus-metric-storage-0\" (UID: \"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.591428 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.591468 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f024894d-6e8c-403f-a9b2-11561febb3be-console-config\") pod \"console-68944d7b94-tn494\" (UID: \"f024894d-6e8c-403f-a9b2-11561febb3be\") " pod="openshift-console/console-68944d7b94-tn494" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.591519 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f024894d-6e8c-403f-a9b2-11561febb3be-service-ca\") pod \"console-68944d7b94-tn494\" (UID: \"f024894d-6e8c-403f-a9b2-11561febb3be\") " pod="openshift-console/console-68944d7b94-tn494" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.591544 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f024894d-6e8c-403f-a9b2-11561febb3be-trusted-ca-bundle\") pod \"console-68944d7b94-tn494\" (UID: \"f024894d-6e8c-403f-a9b2-11561febb3be\") " pod="openshift-console/console-68944d7b94-tn494" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.591709 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2hhw\" (UniqueName: \"kubernetes.io/projected/adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1-kube-api-access-v2hhw\") pod \"prometheus-metric-storage-0\" (UID: \"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:29:59 crc 
kubenswrapper[4796]: I1202 20:29:59.591740 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f024894d-6e8c-403f-a9b2-11561febb3be-console-serving-cert\") pod \"console-68944d7b94-tn494\" (UID: \"f024894d-6e8c-403f-a9b2-11561febb3be\") " pod="openshift-console/console-68944d7b94-tn494" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.591811 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.591859 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6pb2\" (UniqueName: \"kubernetes.io/projected/f024894d-6e8c-403f-a9b2-11561febb3be-kube-api-access-t6pb2\") pod \"console-68944d7b94-tn494\" (UID: \"f024894d-6e8c-403f-a9b2-11561febb3be\") " pod="openshift-console/console-68944d7b94-tn494" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.591892 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.591920 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f024894d-6e8c-403f-a9b2-11561febb3be-console-oauth-config\") pod \"console-68944d7b94-tn494\" (UID: \"f024894d-6e8c-403f-a9b2-11561febb3be\") " pod="openshift-console/console-68944d7b94-tn494" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.591968 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f024894d-6e8c-403f-a9b2-11561febb3be-oauth-serving-cert\") pod \"console-68944d7b94-tn494\" (UID: \"f024894d-6e8c-403f-a9b2-11561febb3be\") " pod="openshift-console/console-68944d7b94-tn494" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.592193 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.594074 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.601781 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1-config-out\") pod 
\"prometheus-metric-storage-0\" (UID: \"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.601944 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.602440 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.602930 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.605237 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1-config\") pod \"prometheus-metric-storage-0\" (UID: \"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.612176 4796 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.612228 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ac0fc788-d8ad-4ecd-a0f3-abf59e50af78\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ac0fc788-d8ad-4ecd-a0f3-abf59e50af78\") pod \"prometheus-metric-storage-0\" (UID: \"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d6a10f67185551a5fa7e32cf10383bff9b63c518132820d4404eb264d6f6191b/globalmount\"" pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.625983 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2hhw\" (UniqueName: \"kubernetes.io/projected/adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1-kube-api-access-v2hhw\") pod \"prometheus-metric-storage-0\" (UID: \"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.690343 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ac0fc788-d8ad-4ecd-a0f3-abf59e50af78\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ac0fc788-d8ad-4ecd-a0f3-abf59e50af78\") pod \"prometheus-metric-storage-0\" (UID: \"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.694608 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f024894d-6e8c-403f-a9b2-11561febb3be-oauth-serving-cert\") pod \"console-68944d7b94-tn494\" (UID: \"f024894d-6e8c-403f-a9b2-11561febb3be\") " pod="openshift-console/console-68944d7b94-tn494" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.694716 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f024894d-6e8c-403f-a9b2-11561febb3be-console-config\") pod \"console-68944d7b94-tn494\" (UID: \"f024894d-6e8c-403f-a9b2-11561febb3be\") " pod="openshift-console/console-68944d7b94-tn494" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.694771 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f024894d-6e8c-403f-a9b2-11561febb3be-service-ca\") pod \"console-68944d7b94-tn494\" (UID: \"f024894d-6e8c-403f-a9b2-11561febb3be\") " pod="openshift-console/console-68944d7b94-tn494" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.694792 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f024894d-6e8c-403f-a9b2-11561febb3be-trusted-ca-bundle\") pod \"console-68944d7b94-tn494\" (UID: \"f024894d-6e8c-403f-a9b2-11561febb3be\") " pod="openshift-console/console-68944d7b94-tn494" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.694838 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66c828b2-afdf-4b46-9199-a7c0ceaf3942-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-94mrc\" (UID: \"66c828b2-afdf-4b46-9199-a7c0ceaf3942\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-94mrc" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.694869 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f024894d-6e8c-403f-a9b2-11561febb3be-console-serving-cert\") pod \"console-68944d7b94-tn494\" (UID: \"f024894d-6e8c-403f-a9b2-11561febb3be\") " pod="openshift-console/console-68944d7b94-tn494" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.694920 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6pb2\" (UniqueName: \"kubernetes.io/projected/f024894d-6e8c-403f-a9b2-11561febb3be-kube-api-access-t6pb2\") pod \"console-68944d7b94-tn494\" (UID: \"f024894d-6e8c-403f-a9b2-11561febb3be\") " pod="openshift-console/console-68944d7b94-tn494" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.694957 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f024894d-6e8c-403f-a9b2-11561febb3be-console-oauth-config\") pod \"console-68944d7b94-tn494\" (UID: \"f024894d-6e8c-403f-a9b2-11561febb3be\") " pod="openshift-console/console-68944d7b94-tn494" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.696009 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f024894d-6e8c-403f-a9b2-11561febb3be-trusted-ca-bundle\") pod \"console-68944d7b94-tn494\" (UID: \"f024894d-6e8c-403f-a9b2-11561febb3be\") " pod="openshift-console/console-68944d7b94-tn494" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.696055 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f024894d-6e8c-403f-a9b2-11561febb3be-service-ca\") pod \"console-68944d7b94-tn494\" (UID: \"f024894d-6e8c-403f-a9b2-11561febb3be\") " pod="openshift-console/console-68944d7b94-tn494" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.696600 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f024894d-6e8c-403f-a9b2-11561febb3be-console-config\") pod \"console-68944d7b94-tn494\" (UID: \"f024894d-6e8c-403f-a9b2-11561febb3be\") " pod="openshift-console/console-68944d7b94-tn494" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.697097 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f024894d-6e8c-403f-a9b2-11561febb3be-oauth-serving-cert\") pod \"console-68944d7b94-tn494\" (UID: \"f024894d-6e8c-403f-a9b2-11561febb3be\") " pod="openshift-console/console-68944d7b94-tn494" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.698962 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66c828b2-afdf-4b46-9199-a7c0ceaf3942-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-94mrc\" (UID: \"66c828b2-afdf-4b46-9199-a7c0ceaf3942\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-94mrc" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.703972 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f024894d-6e8c-403f-a9b2-11561febb3be-console-oauth-config\") pod \"console-68944d7b94-tn494\" (UID: \"f024894d-6e8c-403f-a9b2-11561febb3be\") " pod="openshift-console/console-68944d7b94-tn494" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.705301 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f024894d-6e8c-403f-a9b2-11561febb3be-console-serving-cert\") pod \"console-68944d7b94-tn494\" (UID: \"f024894d-6e8c-403f-a9b2-11561febb3be\") " pod="openshift-console/console-68944d7b94-tn494" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.723090 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6pb2\" (UniqueName: \"kubernetes.io/projected/f024894d-6e8c-403f-a9b2-11561febb3be-kube-api-access-t6pb2\") pod \"console-68944d7b94-tn494\" (UID: \"f024894d-6e8c-403f-a9b2-11561febb3be\") " pod="openshift-console/console-68944d7b94-tn494" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.847393 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-68944d7b94-tn494" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.924908 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/alertmanager-metric-storage-0"] Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.959416 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-94mrc" Dec 02 20:29:59 crc kubenswrapper[4796]: I1202 20:29:59.971672 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:30:00 crc kubenswrapper[4796]: I1202 20:30:00.112897 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"2b4c0794-6b59-4170-8508-0e37663c7094","Type":"ContainerStarted","Data":"e852827cf35bc4643c0809b4fbfc2a7a2664a5d441e3dfd785d4721040fc5aa5"} Dec 02 20:30:00 crc kubenswrapper[4796]: I1202 20:30:00.114813 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"52ef12c4-98d6-4208-b52c-b32a152b87bc","Type":"ContainerStarted","Data":"6b43e155c753412fc53b8d234d1e68baddf05c659d04dbd1cd01218837607422"} Dec 02 20:30:00 crc kubenswrapper[4796]: I1202 20:30:00.168608 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411790-5lhq5"] Dec 02 20:30:00 crc kubenswrapper[4796]: I1202 20:30:00.169695 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411790-5lhq5" Dec 02 20:30:00 crc kubenswrapper[4796]: I1202 20:30:00.175652 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 02 20:30:00 crc kubenswrapper[4796]: I1202 20:30:00.175875 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 02 20:30:00 crc kubenswrapper[4796]: I1202 20:30:00.184466 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411790-5lhq5"] Dec 02 20:30:00 crc kubenswrapper[4796]: I1202 20:30:00.305816 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm5xz\" (UniqueName: \"kubernetes.io/projected/2d6df2aa-6e9f-449f-8453-a593809f31ba-kube-api-access-bm5xz\") pod \"collect-profiles-29411790-5lhq5\" (UID: \"2d6df2aa-6e9f-449f-8453-a593809f31ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411790-5lhq5" Dec 02 20:30:00 crc kubenswrapper[4796]: I1202 20:30:00.305889 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d6df2aa-6e9f-449f-8453-a593809f31ba-config-volume\") pod \"collect-profiles-29411790-5lhq5\" (UID: \"2d6df2aa-6e9f-449f-8453-a593809f31ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411790-5lhq5" Dec 02 20:30:00 crc kubenswrapper[4796]: I1202 20:30:00.305929 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d6df2aa-6e9f-449f-8453-a593809f31ba-secret-volume\") pod \"collect-profiles-29411790-5lhq5\" (UID: \"2d6df2aa-6e9f-449f-8453-a593809f31ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411790-5lhq5" Dec 02 20:30:00 crc kubenswrapper[4796]: I1202 20:30:00.409776 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d6df2aa-6e9f-449f-8453-a593809f31ba-secret-volume\") pod \"collect-profiles-29411790-5lhq5\" (UID: \"2d6df2aa-6e9f-449f-8453-a593809f31ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411790-5lhq5" Dec 02 20:30:00 crc kubenswrapper[4796]: I1202 20:30:00.409896 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm5xz\" (UniqueName: \"kubernetes.io/projected/2d6df2aa-6e9f-449f-8453-a593809f31ba-kube-api-access-bm5xz\") pod \"collect-profiles-29411790-5lhq5\" (UID: \"2d6df2aa-6e9f-449f-8453-a593809f31ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411790-5lhq5" Dec 02 20:30:00 crc kubenswrapper[4796]: I1202 20:30:00.409988 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d6df2aa-6e9f-449f-8453-a593809f31ba-config-volume\") pod \"collect-profiles-29411790-5lhq5\" (UID: \"2d6df2aa-6e9f-449f-8453-a593809f31ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411790-5lhq5" Dec 02 20:30:00 crc kubenswrapper[4796]: I1202 20:30:00.410854 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d6df2aa-6e9f-449f-8453-a593809f31ba-config-volume\") pod 
\"collect-profiles-29411790-5lhq5\" (UID: \"2d6df2aa-6e9f-449f-8453-a593809f31ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411790-5lhq5" Dec 02 20:30:00 crc kubenswrapper[4796]: I1202 20:30:00.422960 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d6df2aa-6e9f-449f-8453-a593809f31ba-secret-volume\") pod \"collect-profiles-29411790-5lhq5\" (UID: \"2d6df2aa-6e9f-449f-8453-a593809f31ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411790-5lhq5" Dec 02 20:30:00 crc kubenswrapper[4796]: I1202 20:30:00.451176 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm5xz\" (UniqueName: \"kubernetes.io/projected/2d6df2aa-6e9f-449f-8453-a593809f31ba-kube-api-access-bm5xz\") pod \"collect-profiles-29411790-5lhq5\" (UID: \"2d6df2aa-6e9f-449f-8453-a593809f31ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411790-5lhq5" Dec 02 20:30:00 crc kubenswrapper[4796]: I1202 20:30:00.463426 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-68944d7b94-tn494"] Dec 02 20:30:00 crc kubenswrapper[4796]: I1202 20:30:00.521548 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411790-5lhq5" Dec 02 20:30:00 crc kubenswrapper[4796]: W1202 20:30:00.786213 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf024894d_6e8c_403f_a9b2_11561febb3be.slice/crio-8db99ab923048cb4e822baaac97aec743a508fc036f04b1ea638a2d4d73b5f6a WatchSource:0}: Error finding container 8db99ab923048cb4e822baaac97aec743a508fc036f04b1ea638a2d4d73b5f6a: Status 404 returned error can't find the container with id 8db99ab923048cb4e822baaac97aec743a508fc036f04b1ea638a2d4d73b5f6a Dec 02 20:30:01 crc kubenswrapper[4796]: I1202 20:30:01.125533 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-68944d7b94-tn494" event={"ID":"f024894d-6e8c-403f-a9b2-11561febb3be","Type":"ContainerStarted","Data":"8db99ab923048cb4e822baaac97aec743a508fc036f04b1ea638a2d4d73b5f6a"} Dec 02 20:30:01 crc kubenswrapper[4796]: I1202 20:30:01.330818 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-94mrc"] Dec 02 20:30:01 crc kubenswrapper[4796]: I1202 20:30:01.868225 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Dec 02 20:30:01 crc kubenswrapper[4796]: W1202 20:30:01.898420 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadfbe1bd_5da9_4d9c_86d6_7dcfe9a5fec1.slice/crio-695cc5bda7f7467f8c82d3f79770417db65af62118732e2628db29d740b6d39a WatchSource:0}: Error finding container 695cc5bda7f7467f8c82d3f79770417db65af62118732e2628db29d740b6d39a: Status 404 returned error can't find the container with id 695cc5bda7f7467f8c82d3f79770417db65af62118732e2628db29d740b6d39a Dec 02 20:30:02 crc kubenswrapper[4796]: I1202 20:30:02.140557 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1","Type":"ContainerStarted","Data":"695cc5bda7f7467f8c82d3f79770417db65af62118732e2628db29d740b6d39a"} Dec 02 20:30:02 crc kubenswrapper[4796]: I1202 20:30:02.142479 4796 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-94mrc" event={"ID":"66c828b2-afdf-4b46-9199-a7c0ceaf3942","Type":"ContainerStarted","Data":"078ad70e16157a760d275857a055f39d9aecf68475f93d92c1801b4f975f0998"} Dec 02 20:30:02 crc kubenswrapper[4796]: I1202 20:30:02.381113 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411790-5lhq5"] Dec 02 20:30:03 crc kubenswrapper[4796]: I1202 20:30:03.152555 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411790-5lhq5" event={"ID":"2d6df2aa-6e9f-449f-8453-a593809f31ba","Type":"ContainerStarted","Data":"c330f3a1d0e9f31b1a52a25434fdb20b2b3d231c9121181deea57f5c4ae4c468"} Dec 02 20:30:03 crc kubenswrapper[4796]: I1202 20:30:03.155380 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-68944d7b94-tn494" event={"ID":"f024894d-6e8c-403f-a9b2-11561febb3be","Type":"ContainerStarted","Data":"d0ec33fc3ddaf206063241d84a5fb6cdd13d4dfad7e73b1d1378d73142066b95"} Dec 02 20:30:03 crc kubenswrapper[4796]: I1202 20:30:03.176131 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-68944d7b94-tn494" podStartSLOduration=4.176115088 podStartE2EDuration="4.176115088s" podCreationTimestamp="2025-12-02 20:29:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:30:03.174195911 +0000 UTC m=+1086.177571445" watchObservedRunningTime="2025-12-02 20:30:03.176115088 +0000 UTC m=+1086.179490622" Dec 02 20:30:09 crc kubenswrapper[4796]: I1202 20:30:09.847885 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-68944d7b94-tn494" Dec 02 20:30:09 crc kubenswrapper[4796]: I1202 20:30:09.848545 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-68944d7b94-tn494" Dec 02 20:30:09 crc kubenswrapper[4796]: I1202 20:30:09.854788 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-68944d7b94-tn494" Dec 02 20:30:10 crc kubenswrapper[4796]: I1202 20:30:10.220740 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-68944d7b94-tn494" Dec 02 20:30:10 crc kubenswrapper[4796]: I1202 20:30:10.320006 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-57c46bb884-zwq4z"] Dec 02 20:30:13 crc kubenswrapper[4796]: E1202 20:30:13.407542 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Dec 02 20:30:13 crc kubenswrapper[4796]: E1202 20:30:13.408119 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 
/var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mf78v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000710000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-notifications-server-0_watcher-kuttl-default(0add539b-c51e-4616-9235-12465a2e5ecb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 20:30:13 crc kubenswrapper[4796]: E1202 20:30:13.409347 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" podUID="0add539b-c51e-4616-9235-12465a2e5ecb" Dec 02 20:30:13 crc kubenswrapper[4796]: E1202 20:30:13.430098 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Dec 02 20:30:13 crc kubenswrapper[4796]: E1202 20:30:13.430278 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> 
/var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-thzfm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000710000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_watcher-kuttl-default(17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 20:30:13 crc kubenswrapper[4796]: E1202 20:30:13.431509 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="watcher-kuttl-default/rabbitmq-server-0" podUID="17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2" Dec 02 20:30:14 crc kubenswrapper[4796]: E1202 20:30:14.253288 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" podUID="0add539b-c51e-4616-9235-12465a2e5ecb" Dec 02 20:30:14 crc kubenswrapper[4796]: E1202 20:30:14.253736 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="watcher-kuttl-default/rabbitmq-server-0" podUID="17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2" Dec 02 20:30:15 crc kubenswrapper[4796]: E1202 
20:30:15.389813 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Dec 02 20:30:15 crc kubenswrapper[4796]: E1202 20:30:15.390579 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n544h694h54fh675h65fh74hc9h5bfh89h646h8dh649h679h595h58bh648h65bh8dh557h56bh86h5c6h89h54h7bhc9h668h64chf6h5c4h5cfh64dq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6hr4j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_watcher-kuttl-default(ebca558f-ecfa-4b05-b9df-b59f884f0366): ErrImagePull: rpc error: code 
= Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 20:30:15 crc kubenswrapper[4796]: E1202 20:30:15.391989 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="watcher-kuttl-default/memcached-0" podUID="ebca558f-ecfa-4b05-b9df-b59f884f0366" Dec 02 20:30:16 crc kubenswrapper[4796]: E1202 20:30:16.268829 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="watcher-kuttl-default/memcached-0" podUID="ebca558f-ecfa-4b05-b9df-b59f884f0366" Dec 02 20:30:17 crc kubenswrapper[4796]: E1202 20:30:17.016898 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Dec 02 20:30:17 crc kubenswrapper[4796]: E1202 20:30:17.017254 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bbgmx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_watcher-kuttl-default(7c8100d7-1ae6-4220-88d4-527f681270b3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 20:30:17 crc kubenswrapper[4796]: E1202 20:30:17.018530 4796 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="watcher-kuttl-default/openstack-galera-0" podUID="7c8100d7-1ae6-4220-88d4-527f681270b3" Dec 02 20:30:17 crc kubenswrapper[4796]: E1202 20:30:17.278108 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="watcher-kuttl-default/openstack-galera-0" podUID="7c8100d7-1ae6-4220-88d4-527f681270b3" Dec 02 20:30:18 crc kubenswrapper[4796]: I1202 20:30:18.283806 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-94mrc" event={"ID":"66c828b2-afdf-4b46-9199-a7c0ceaf3942","Type":"ContainerStarted","Data":"d47d05292e32794ca6b869fcb212d6e6fe398cf81920f4cafb29763b981342e9"} Dec 02 20:30:18 crc kubenswrapper[4796]: I1202 20:30:18.285968 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"52ef12c4-98d6-4208-b52c-b32a152b87bc","Type":"ContainerStarted","Data":"41c0216f62af48ce51ad081afc04dfde2ef3d51a52fc0ca713289c45dfa5be6c"} Dec 02 20:30:18 crc kubenswrapper[4796]: I1202 20:30:18.286086 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 20:30:18 crc kubenswrapper[4796]: I1202 20:30:18.288495 4796 generic.go:334] "Generic (PLEG): container finished" podID="2d6df2aa-6e9f-449f-8453-a593809f31ba" containerID="0684c5d9f3956a85ea1f6ba2679defe6dacefdfaa3d844c8a49c9f7fd9fff88c" exitCode=0 Dec 02 20:30:18 crc kubenswrapper[4796]: I1202 20:30:18.288542 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411790-5lhq5" event={"ID":"2d6df2aa-6e9f-449f-8453-a593809f31ba","Type":"ContainerDied","Data":"0684c5d9f3956a85ea1f6ba2679defe6dacefdfaa3d844c8a49c9f7fd9fff88c"} Dec 02 20:30:18 crc kubenswrapper[4796]: I1202 20:30:18.305817 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-94mrc" podStartSLOduration=4.186938784 podStartE2EDuration="20.305800607s" podCreationTimestamp="2025-12-02 20:29:58 +0000 UTC" firstStartedPulling="2025-12-02 20:30:01.416328491 +0000 UTC m=+1084.419704015" lastFinishedPulling="2025-12-02 20:30:17.535190284 +0000 UTC m=+1100.538565838" observedRunningTime="2025-12-02 20:30:18.30386132 +0000 UTC m=+1101.307236864" watchObservedRunningTime="2025-12-02 20:30:18.305800607 +0000 UTC m=+1101.309176141" Dec 02 20:30:18 crc kubenswrapper[4796]: I1202 20:30:18.362967 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/kube-state-metrics-0" podStartSLOduration=3.020994494 podStartE2EDuration="21.362939893s" podCreationTimestamp="2025-12-02 20:29:57 +0000 UTC" firstStartedPulling="2025-12-02 20:29:59.18600946 +0000 UTC m=+1082.189384994" lastFinishedPulling="2025-12-02 20:30:17.527954829 +0000 UTC m=+1100.531330393" observedRunningTime="2025-12-02 20:30:18.359054909 +0000 UTC m=+1101.362430443" watchObservedRunningTime="2025-12-02 20:30:18.362939893 +0000 UTC m=+1101.366315437" Dec 02 20:30:19 crc kubenswrapper[4796]: I1202 20:30:19.682035 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411790-5lhq5" Dec 02 20:30:19 crc kubenswrapper[4796]: I1202 20:30:19.873628 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d6df2aa-6e9f-449f-8453-a593809f31ba-secret-volume\") pod \"2d6df2aa-6e9f-449f-8453-a593809f31ba\" (UID: \"2d6df2aa-6e9f-449f-8453-a593809f31ba\") " Dec 02 20:30:19 crc kubenswrapper[4796]: I1202 20:30:19.873845 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bm5xz\" (UniqueName: \"kubernetes.io/projected/2d6df2aa-6e9f-449f-8453-a593809f31ba-kube-api-access-bm5xz\") pod \"2d6df2aa-6e9f-449f-8453-a593809f31ba\" (UID: \"2d6df2aa-6e9f-449f-8453-a593809f31ba\") " Dec 02 20:30:19 crc kubenswrapper[4796]: I1202 20:30:19.873926 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d6df2aa-6e9f-449f-8453-a593809f31ba-config-volume\") pod \"2d6df2aa-6e9f-449f-8453-a593809f31ba\" (UID: \"2d6df2aa-6e9f-449f-8453-a593809f31ba\") " Dec 02 20:30:19 crc kubenswrapper[4796]: I1202 20:30:19.874914 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d6df2aa-6e9f-449f-8453-a593809f31ba-config-volume" (OuterVolumeSpecName: "config-volume") pod "2d6df2aa-6e9f-449f-8453-a593809f31ba" (UID: "2d6df2aa-6e9f-449f-8453-a593809f31ba"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:30:19 crc kubenswrapper[4796]: I1202 20:30:19.976575 4796 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d6df2aa-6e9f-449f-8453-a593809f31ba-config-volume\") on node \"crc\" DevicePath \"\"" Dec 02 20:30:20 crc kubenswrapper[4796]: I1202 20:30:20.137580 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d6df2aa-6e9f-449f-8453-a593809f31ba-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2d6df2aa-6e9f-449f-8453-a593809f31ba" (UID: "2d6df2aa-6e9f-449f-8453-a593809f31ba"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:30:20 crc kubenswrapper[4796]: I1202 20:30:20.138391 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d6df2aa-6e9f-449f-8453-a593809f31ba-kube-api-access-bm5xz" (OuterVolumeSpecName: "kube-api-access-bm5xz") pod "2d6df2aa-6e9f-449f-8453-a593809f31ba" (UID: "2d6df2aa-6e9f-449f-8453-a593809f31ba"). InnerVolumeSpecName "kube-api-access-bm5xz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:30:20 crc kubenswrapper[4796]: I1202 20:30:20.179151 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bm5xz\" (UniqueName: \"kubernetes.io/projected/2d6df2aa-6e9f-449f-8453-a593809f31ba-kube-api-access-bm5xz\") on node \"crc\" DevicePath \"\"" Dec 02 20:30:20 crc kubenswrapper[4796]: I1202 20:30:20.179191 4796 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d6df2aa-6e9f-449f-8453-a593809f31ba-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 02 20:30:20 crc kubenswrapper[4796]: I1202 20:30:20.324160 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411790-5lhq5" event={"ID":"2d6df2aa-6e9f-449f-8453-a593809f31ba","Type":"ContainerDied","Data":"c330f3a1d0e9f31b1a52a25434fdb20b2b3d231c9121181deea57f5c4ae4c468"} Dec 02 20:30:20 crc kubenswrapper[4796]: I1202 20:30:20.324192 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411790-5lhq5" Dec 02 20:30:20 crc kubenswrapper[4796]: I1202 20:30:20.324208 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c330f3a1d0e9f31b1a52a25434fdb20b2b3d231c9121181deea57f5c4ae4c468" Dec 02 20:30:20 crc kubenswrapper[4796]: I1202 20:30:20.325702 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1","Type":"ContainerStarted","Data":"d590de5afef42d93b9a38bbfd4aa330e60d1f4f7d70c536db133d6f9063367ac"} Dec 02 20:30:21 crc kubenswrapper[4796]: I1202 20:30:21.337856 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"2b4c0794-6b59-4170-8508-0e37663c7094","Type":"ContainerStarted","Data":"88a80f5514b9683c785aa5c0c8fffce950d3e3053659a3ef31eae33e4b5af5f8"} Dec 02 20:30:25 crc kubenswrapper[4796]: I1202 20:30:25.189143 4796 patch_prober.go:28] interesting pod/machine-config-daemon-wzhpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 20:30:25 crc kubenswrapper[4796]: I1202 20:30:25.189578 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 20:30:28 crc kubenswrapper[4796]: I1202 20:30:28.050426 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 20:30:29 crc kubenswrapper[4796]: I1202 20:30:29.051564 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-server-0" event={"ID":"17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2","Type":"ContainerStarted","Data":"4a2d5fc8422b53091db0e40142071aed035039d662e01fd69453572d3b77ca68"} Dec 02 20:30:29 crc kubenswrapper[4796]: I1202 20:30:29.056728 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-8tgcn" podUID="a5e8b895-e788-44f4-8481-520f1cbd75c0" containerName="manager" 
probeResult="failure" output="Get \"http://10.217.0.95:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 20:30:29 crc kubenswrapper[4796]: I1202 20:30:29.057003 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-8tgcn" podUID="a5e8b895-e788-44f4-8481-520f1cbd75c0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.95:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 20:30:29 crc kubenswrapper[4796]: I1202 20:30:29.056990 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-lgj8f" podUID="0b0e9209-3a80-4f10-9b56-4d3d28d0dee2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.96:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 20:30:29 crc kubenswrapper[4796]: I1202 20:30:29.057032 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-lgj8f" podUID="0b0e9209-3a80-4f10-9b56-4d3d28d0dee2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.96:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 20:30:30 crc kubenswrapper[4796]: I1202 20:30:30.065115 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstack-galera-0" event={"ID":"7c8100d7-1ae6-4220-88d4-527f681270b3","Type":"ContainerStarted","Data":"8c5e89b75d6719ac8679d9c84f5464ed979fa944afedc3ebd342a8c879d790c1"} Dec 02 20:30:30 crc kubenswrapper[4796]: I1202 20:30:30.069102 4796 generic.go:334] "Generic (PLEG): container finished" podID="adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1" containerID="d590de5afef42d93b9a38bbfd4aa330e60d1f4f7d70c536db133d6f9063367ac" exitCode=0 Dec 02 20:30:30 crc kubenswrapper[4796]: I1202 20:30:30.069162 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1","Type":"ContainerDied","Data":"d590de5afef42d93b9a38bbfd4aa330e60d1f4f7d70c536db133d6f9063367ac"} Dec 02 20:30:30 crc kubenswrapper[4796]: I1202 20:30:30.072060 4796 generic.go:334] "Generic (PLEG): container finished" podID="2b4c0794-6b59-4170-8508-0e37663c7094" containerID="88a80f5514b9683c785aa5c0c8fffce950d3e3053659a3ef31eae33e4b5af5f8" exitCode=0 Dec 02 20:30:30 crc kubenswrapper[4796]: I1202 20:30:30.072089 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"2b4c0794-6b59-4170-8508-0e37663c7094","Type":"ContainerDied","Data":"88a80f5514b9683c785aa5c0c8fffce950d3e3053659a3ef31eae33e4b5af5f8"} Dec 02 20:30:31 crc kubenswrapper[4796]: I1202 20:30:31.082969 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"0add539b-c51e-4616-9235-12465a2e5ecb","Type":"ContainerStarted","Data":"05c9d91658391647a4c84ecf0db296c8f5df38fd027cdcf46ed199babba8233f"} Dec 02 20:30:31 crc kubenswrapper[4796]: I1202 20:30:31.085545 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"ebca558f-ecfa-4b05-b9df-b59f884f0366","Type":"ContainerStarted","Data":"44041b0d083362feba64bcc6395ec8c36e5201ec9d61421fd1e94207dd43de42"} Dec 02 20:30:31 crc kubenswrapper[4796]: 
I1202 20:30:31.086058 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/memcached-0" Dec 02 20:30:31 crc kubenswrapper[4796]: I1202 20:30:31.148245 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/memcached-0" podStartSLOduration=2.148550802 podStartE2EDuration="34.148226835s" podCreationTimestamp="2025-12-02 20:29:57 +0000 UTC" firstStartedPulling="2025-12-02 20:29:58.656158797 +0000 UTC m=+1081.659534331" lastFinishedPulling="2025-12-02 20:30:30.65583479 +0000 UTC m=+1113.659210364" observedRunningTime="2025-12-02 20:30:31.144120294 +0000 UTC m=+1114.147495818" watchObservedRunningTime="2025-12-02 20:30:31.148226835 +0000 UTC m=+1114.151602369" Dec 02 20:30:33 crc kubenswrapper[4796]: I1202 20:30:33.109083 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"2b4c0794-6b59-4170-8508-0e37663c7094","Type":"ContainerStarted","Data":"cf2d921b69423d086243a56236da6da71e7fed1ffb3239f0646512fc5c9c5bbd"} Dec 02 20:30:34 crc kubenswrapper[4796]: I1202 20:30:34.125718 4796 generic.go:334] "Generic (PLEG): container finished" podID="7c8100d7-1ae6-4220-88d4-527f681270b3" containerID="8c5e89b75d6719ac8679d9c84f5464ed979fa944afedc3ebd342a8c879d790c1" exitCode=0 Dec 02 20:30:34 crc kubenswrapper[4796]: I1202 20:30:34.125806 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstack-galera-0" event={"ID":"7c8100d7-1ae6-4220-88d4-527f681270b3","Type":"ContainerDied","Data":"8c5e89b75d6719ac8679d9c84f5464ed979fa944afedc3ebd342a8c879d790c1"} Dec 02 20:30:35 crc kubenswrapper[4796]: I1202 20:30:35.390124 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-57c46bb884-zwq4z" podUID="bcaddfd6-aacd-4eae-a723-2837f69c9ecd" containerName="console" containerID="cri-o://f9fc04b95847febce56605560a410b199af24c474279dcc444da0ab1b5ca491b" gracePeriod=15 Dec 02 20:30:36 crc kubenswrapper[4796]: I1202 20:30:36.147236 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"2b4c0794-6b59-4170-8508-0e37663c7094","Type":"ContainerStarted","Data":"196d87f1cf6c5de072969db95231f8ab9c5058057377575e214061623524ed71"} Dec 02 20:30:36 crc kubenswrapper[4796]: I1202 20:30:36.148958 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 20:30:36 crc kubenswrapper[4796]: I1202 20:30:36.150695 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-57c46bb884-zwq4z_bcaddfd6-aacd-4eae-a723-2837f69c9ecd/console/0.log" Dec 02 20:30:36 crc kubenswrapper[4796]: I1202 20:30:36.150728 4796 generic.go:334] "Generic (PLEG): container finished" podID="bcaddfd6-aacd-4eae-a723-2837f69c9ecd" containerID="f9fc04b95847febce56605560a410b199af24c474279dcc444da0ab1b5ca491b" exitCode=2 Dec 02 20:30:36 crc kubenswrapper[4796]: I1202 20:30:36.150754 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57c46bb884-zwq4z" event={"ID":"bcaddfd6-aacd-4eae-a723-2837f69c9ecd","Type":"ContainerDied","Data":"f9fc04b95847febce56605560a410b199af24c474279dcc444da0ab1b5ca491b"} Dec 02 20:30:36 crc kubenswrapper[4796]: I1202 20:30:36.152607 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/alertmanager-metric-storage-0" Dec 02 20:30:36 crc 
kubenswrapper[4796]: I1202 20:30:36.174964 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/alertmanager-metric-storage-0" podStartSLOduration=5.549020724 podStartE2EDuration="38.174935185s" podCreationTimestamp="2025-12-02 20:29:58 +0000 UTC" firstStartedPulling="2025-12-02 20:30:00.114157424 +0000 UTC m=+1083.117532958" lastFinishedPulling="2025-12-02 20:30:32.740071855 +0000 UTC m=+1115.743447419" observedRunningTime="2025-12-02 20:30:36.170873906 +0000 UTC m=+1119.174249460" watchObservedRunningTime="2025-12-02 20:30:36.174935185 +0000 UTC m=+1119.178310719" Dec 02 20:30:37 crc kubenswrapper[4796]: I1202 20:30:37.748503 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/memcached-0" Dec 02 20:30:41 crc kubenswrapper[4796]: I1202 20:30:41.294640 4796 patch_prober.go:28] interesting pod/console-57c46bb884-zwq4z container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.48:8443/health\": dial tcp 10.217.0.48:8443: connect: connection refused" start-of-body= Dec 02 20:30:41 crc kubenswrapper[4796]: I1202 20:30:41.295682 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-57c46bb884-zwq4z" podUID="bcaddfd6-aacd-4eae-a723-2837f69c9ecd" containerName="console" probeResult="failure" output="Get \"https://10.217.0.48:8443/health\": dial tcp 10.217.0.48:8443: connect: connection refused" Dec 02 20:30:43 crc kubenswrapper[4796]: E1202 20:30:43.458572 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:17ea20be390a94ab39f5cdd7f0cbc2498046eebcf77fe3dec9aa288d5c2cf46b" Dec 02 20:30:43 crc kubenswrapper[4796]: E1202 20:30:43.460056 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus,Image:registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:17ea20be390a94ab39f5cdd7f0cbc2498046eebcf77fe3dec9aa288d5c2cf46b,Command:[],Args:[--config.file=/etc/prometheus/config_out/prometheus.env.yaml --web.enable-lifecycle --web.route-prefix=/ --storage.tsdb.retention.time=24h --storage.tsdb.path=/prometheus 
--web.config.file=/etc/prometheus/web_config/web-config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:web,HostPort:0,ContainerPort:9090,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-out,ReadOnly:true,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-assets,ReadOnly:true,MountPath:/etc/prometheus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-db,ReadOnly:false,MountPath:/prometheus,SubPath:prometheus-db,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-0,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:web-config,ReadOnly:true,MountPath:/etc/prometheus/web_config/web-config.yaml,SubPath:web-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v2hhw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/healthy,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:15,SuccessThreshold:1,FailureThreshold:60,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_watcher-kuttl-default(adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 02 20:30:43 crc kubenswrapper[4796]: I1202 20:30:43.981626 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-57c46bb884-zwq4z_bcaddfd6-aacd-4eae-a723-2837f69c9ecd/console/0.log" Dec 02 20:30:43 crc kubenswrapper[4796]: I1202 20:30:43.982157 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-57c46bb884-zwq4z" Dec 02 20:30:44 crc kubenswrapper[4796]: I1202 20:30:44.065413 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bcaddfd6-aacd-4eae-a723-2837f69c9ecd-service-ca\") pod \"bcaddfd6-aacd-4eae-a723-2837f69c9ecd\" (UID: \"bcaddfd6-aacd-4eae-a723-2837f69c9ecd\") " Dec 02 20:30:44 crc kubenswrapper[4796]: I1202 20:30:44.065460 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bcaddfd6-aacd-4eae-a723-2837f69c9ecd-console-oauth-config\") pod \"bcaddfd6-aacd-4eae-a723-2837f69c9ecd\" (UID: \"bcaddfd6-aacd-4eae-a723-2837f69c9ecd\") " Dec 02 20:30:44 crc kubenswrapper[4796]: I1202 20:30:44.065551 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bcaddfd6-aacd-4eae-a723-2837f69c9ecd-console-config\") pod \"bcaddfd6-aacd-4eae-a723-2837f69c9ecd\" (UID: \"bcaddfd6-aacd-4eae-a723-2837f69c9ecd\") " Dec 02 20:30:44 crc kubenswrapper[4796]: I1202 20:30:44.065574 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcaddfd6-aacd-4eae-a723-2837f69c9ecd-trusted-ca-bundle\") pod \"bcaddfd6-aacd-4eae-a723-2837f69c9ecd\" (UID: \"bcaddfd6-aacd-4eae-a723-2837f69c9ecd\") " Dec 02 20:30:44 crc kubenswrapper[4796]: I1202 20:30:44.065593 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bcaddfd6-aacd-4eae-a723-2837f69c9ecd-oauth-serving-cert\") pod \"bcaddfd6-aacd-4eae-a723-2837f69c9ecd\" (UID: \"bcaddfd6-aacd-4eae-a723-2837f69c9ecd\") " Dec 02 20:30:44 crc kubenswrapper[4796]: I1202 20:30:44.065620 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkddz\" (UniqueName: \"kubernetes.io/projected/bcaddfd6-aacd-4eae-a723-2837f69c9ecd-kube-api-access-kkddz\") pod \"bcaddfd6-aacd-4eae-a723-2837f69c9ecd\" (UID: \"bcaddfd6-aacd-4eae-a723-2837f69c9ecd\") " Dec 02 20:30:44 crc kubenswrapper[4796]: I1202 20:30:44.065779 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bcaddfd6-aacd-4eae-a723-2837f69c9ecd-console-serving-cert\") pod \"bcaddfd6-aacd-4eae-a723-2837f69c9ecd\" (UID: \"bcaddfd6-aacd-4eae-a723-2837f69c9ecd\") " Dec 02 20:30:44 crc kubenswrapper[4796]: I1202 20:30:44.066291 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcaddfd6-aacd-4eae-a723-2837f69c9ecd-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "bcaddfd6-aacd-4eae-a723-2837f69c9ecd" (UID: "bcaddfd6-aacd-4eae-a723-2837f69c9ecd"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:30:44 crc kubenswrapper[4796]: I1202 20:30:44.066311 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcaddfd6-aacd-4eae-a723-2837f69c9ecd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "bcaddfd6-aacd-4eae-a723-2837f69c9ecd" (UID: "bcaddfd6-aacd-4eae-a723-2837f69c9ecd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:30:44 crc kubenswrapper[4796]: I1202 20:30:44.066322 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcaddfd6-aacd-4eae-a723-2837f69c9ecd-service-ca" (OuterVolumeSpecName: "service-ca") pod "bcaddfd6-aacd-4eae-a723-2837f69c9ecd" (UID: "bcaddfd6-aacd-4eae-a723-2837f69c9ecd"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:30:44 crc kubenswrapper[4796]: I1202 20:30:44.066603 4796 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bcaddfd6-aacd-4eae-a723-2837f69c9ecd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:30:44 crc kubenswrapper[4796]: I1202 20:30:44.066616 4796 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bcaddfd6-aacd-4eae-a723-2837f69c9ecd-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 20:30:44 crc kubenswrapper[4796]: I1202 20:30:44.066625 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcaddfd6-aacd-4eae-a723-2837f69c9ecd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:30:44 crc kubenswrapper[4796]: I1202 20:30:44.066722 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcaddfd6-aacd-4eae-a723-2837f69c9ecd-console-config" (OuterVolumeSpecName: "console-config") pod "bcaddfd6-aacd-4eae-a723-2837f69c9ecd" (UID: "bcaddfd6-aacd-4eae-a723-2837f69c9ecd"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:30:44 crc kubenswrapper[4796]: I1202 20:30:44.071419 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcaddfd6-aacd-4eae-a723-2837f69c9ecd-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "bcaddfd6-aacd-4eae-a723-2837f69c9ecd" (UID: "bcaddfd6-aacd-4eae-a723-2837f69c9ecd"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:30:44 crc kubenswrapper[4796]: I1202 20:30:44.071467 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcaddfd6-aacd-4eae-a723-2837f69c9ecd-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "bcaddfd6-aacd-4eae-a723-2837f69c9ecd" (UID: "bcaddfd6-aacd-4eae-a723-2837f69c9ecd"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:30:44 crc kubenswrapper[4796]: I1202 20:30:44.071754 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcaddfd6-aacd-4eae-a723-2837f69c9ecd-kube-api-access-kkddz" (OuterVolumeSpecName: "kube-api-access-kkddz") pod "bcaddfd6-aacd-4eae-a723-2837f69c9ecd" (UID: "bcaddfd6-aacd-4eae-a723-2837f69c9ecd"). InnerVolumeSpecName "kube-api-access-kkddz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:30:44 crc kubenswrapper[4796]: I1202 20:30:44.169324 4796 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bcaddfd6-aacd-4eae-a723-2837f69c9ecd-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 02 20:30:44 crc kubenswrapper[4796]: I1202 20:30:44.169390 4796 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bcaddfd6-aacd-4eae-a723-2837f69c9ecd-console-config\") on node \"crc\" DevicePath \"\"" Dec 02 20:30:44 crc kubenswrapper[4796]: I1202 20:30:44.169416 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkddz\" (UniqueName: \"kubernetes.io/projected/bcaddfd6-aacd-4eae-a723-2837f69c9ecd-kube-api-access-kkddz\") on node \"crc\" DevicePath \"\"" Dec 02 20:30:44 crc kubenswrapper[4796]: I1202 20:30:44.169442 4796 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bcaddfd6-aacd-4eae-a723-2837f69c9ecd-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 20:30:44 crc kubenswrapper[4796]: I1202 20:30:44.224346 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-57c46bb884-zwq4z_bcaddfd6-aacd-4eae-a723-2837f69c9ecd/console/0.log" Dec 02 20:30:44 crc kubenswrapper[4796]: I1202 20:30:44.224457 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57c46bb884-zwq4z" event={"ID":"bcaddfd6-aacd-4eae-a723-2837f69c9ecd","Type":"ContainerDied","Data":"0c4e555e4383ec3729526717449c54e6b54aee1c216f0b780e017539c0a55124"} Dec 02 20:30:44 crc kubenswrapper[4796]: I1202 20:30:44.224504 4796 scope.go:117] "RemoveContainer" containerID="f9fc04b95847febce56605560a410b199af24c474279dcc444da0ab1b5ca491b" Dec 02 20:30:44 crc kubenswrapper[4796]: I1202 20:30:44.224545 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-57c46bb884-zwq4z" Dec 02 20:30:44 crc kubenswrapper[4796]: I1202 20:30:44.229580 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstack-galera-0" event={"ID":"7c8100d7-1ae6-4220-88d4-527f681270b3","Type":"ContainerStarted","Data":"91ce8727d6b18a7af1b014b56dcc9b79f8e1473a7135e983377ea13e3dc012d1"} Dec 02 20:30:44 crc kubenswrapper[4796]: I1202 20:30:44.276364 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/openstack-galera-0" podStartSLOduration=17.188456267 podStartE2EDuration="48.276342096s" podCreationTimestamp="2025-12-02 20:29:56 +0000 UTC" firstStartedPulling="2025-12-02 20:29:58.182536978 +0000 UTC m=+1081.185912512" lastFinishedPulling="2025-12-02 20:30:29.270422807 +0000 UTC m=+1112.273798341" observedRunningTime="2025-12-02 20:30:44.262680193 +0000 UTC m=+1127.266055737" watchObservedRunningTime="2025-12-02 20:30:44.276342096 +0000 UTC m=+1127.279717660" Dec 02 20:30:44 crc kubenswrapper[4796]: I1202 20:30:44.292766 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-57c46bb884-zwq4z"] Dec 02 20:30:44 crc kubenswrapper[4796]: I1202 20:30:44.302114 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-57c46bb884-zwq4z"] Dec 02 20:30:45 crc kubenswrapper[4796]: I1202 20:30:45.275423 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcaddfd6-aacd-4eae-a723-2837f69c9ecd" path="/var/lib/kubelet/pods/bcaddfd6-aacd-4eae-a723-2837f69c9ecd/volumes" Dec 02 20:30:46 crc kubenswrapper[4796]: I1202 20:30:46.254688 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1","Type":"ContainerStarted","Data":"1a76e7004cdaec1766fbe0db00333c7ac280d9858aee130792765181996528d9"} Dec 02 20:30:47 crc kubenswrapper[4796]: I1202 20:30:47.475227 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/openstack-galera-0" Dec 02 20:30:47 crc kubenswrapper[4796]: I1202 20:30:47.475632 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/openstack-galera-0" Dec 02 20:30:49 crc kubenswrapper[4796]: I1202 20:30:49.689586 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/openstack-galera-0" Dec 02 20:30:49 crc kubenswrapper[4796]: I1202 20:30:49.784060 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/openstack-galera-0" Dec 02 20:30:50 crc kubenswrapper[4796]: E1202 20:30:50.400931 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="watcher-kuttl-default/prometheus-metric-storage-0" podUID="adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1" Dec 02 20:30:51 crc kubenswrapper[4796]: I1202 20:30:51.350602 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1","Type":"ContainerStarted","Data":"7392c85a53b14151a778e3745e06c54f5d8dc44694c60d5a00eb7a1310fb4b95"} Dec 02 20:30:51 crc kubenswrapper[4796]: E1202 20:30:51.356022 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"prometheus\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:17ea20be390a94ab39f5cdd7f0cbc2498046eebcf77fe3dec9aa288d5c2cf46b\\\"\"" pod="watcher-kuttl-default/prometheus-metric-storage-0" podUID="adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1" Dec 02 20:30:52 crc kubenswrapper[4796]: E1202 20:30:52.363610 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:17ea20be390a94ab39f5cdd7f0cbc2498046eebcf77fe3dec9aa288d5c2cf46b\\\"\"" pod="watcher-kuttl-default/prometheus-metric-storage-0" podUID="adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1" Dec 02 20:30:55 crc kubenswrapper[4796]: I1202 20:30:55.189164 4796 patch_prober.go:28] interesting pod/machine-config-daemon-wzhpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 20:30:55 crc kubenswrapper[4796]: I1202 20:30:55.189670 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 20:30:57 crc kubenswrapper[4796]: I1202 20:30:57.389978 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-db-create-6htqp"] Dec 02 20:30:57 crc kubenswrapper[4796]: E1202 20:30:57.390668 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcaddfd6-aacd-4eae-a723-2837f69c9ecd" containerName="console" Dec 02 20:30:57 crc kubenswrapper[4796]: I1202 20:30:57.390683 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcaddfd6-aacd-4eae-a723-2837f69c9ecd" containerName="console" Dec 02 20:30:57 crc kubenswrapper[4796]: E1202 20:30:57.390703 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d6df2aa-6e9f-449f-8453-a593809f31ba" containerName="collect-profiles" Dec 02 20:30:57 crc kubenswrapper[4796]: I1202 20:30:57.390710 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d6df2aa-6e9f-449f-8453-a593809f31ba" containerName="collect-profiles" Dec 02 20:30:57 crc kubenswrapper[4796]: I1202 20:30:57.390862 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d6df2aa-6e9f-449f-8453-a593809f31ba" containerName="collect-profiles" Dec 02 20:30:57 crc kubenswrapper[4796]: I1202 20:30:57.390882 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcaddfd6-aacd-4eae-a723-2837f69c9ecd" containerName="console" Dec 02 20:30:57 crc kubenswrapper[4796]: I1202 20:30:57.391423 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-create-6htqp" Dec 02 20:30:57 crc kubenswrapper[4796]: I1202 20:30:57.436677 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-db-create-6htqp"] Dec 02 20:30:57 crc kubenswrapper[4796]: I1202 20:30:57.479969 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-b9ad-account-create-update-b8ltn"] Dec 02 20:30:57 crc kubenswrapper[4796]: I1202 20:30:57.481308 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-b9ad-account-create-update-b8ltn" Dec 02 20:30:57 crc kubenswrapper[4796]: I1202 20:30:57.487926 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-db-secret" Dec 02 20:30:57 crc kubenswrapper[4796]: I1202 20:30:57.491164 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-b9ad-account-create-update-b8ltn"] Dec 02 20:30:57 crc kubenswrapper[4796]: I1202 20:30:57.551868 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfee3cc8-9e6e-48fb-a390-2185c467ddf4-operator-scripts\") pod \"keystone-b9ad-account-create-update-b8ltn\" (UID: \"bfee3cc8-9e6e-48fb-a390-2185c467ddf4\") " pod="watcher-kuttl-default/keystone-b9ad-account-create-update-b8ltn" Dec 02 20:30:57 crc kubenswrapper[4796]: I1202 20:30:57.551935 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bb8f0a5-a41a-44f7-b7a4-d6a45a9f7b2f-operator-scripts\") pod \"keystone-db-create-6htqp\" (UID: \"0bb8f0a5-a41a-44f7-b7a4-d6a45a9f7b2f\") " pod="watcher-kuttl-default/keystone-db-create-6htqp" Dec 02 20:30:57 crc kubenswrapper[4796]: I1202 20:30:57.552127 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l5kd\" (UniqueName: \"kubernetes.io/projected/bfee3cc8-9e6e-48fb-a390-2185c467ddf4-kube-api-access-5l5kd\") pod \"keystone-b9ad-account-create-update-b8ltn\" (UID: \"bfee3cc8-9e6e-48fb-a390-2185c467ddf4\") " pod="watcher-kuttl-default/keystone-b9ad-account-create-update-b8ltn" Dec 02 20:30:57 crc kubenswrapper[4796]: I1202 20:30:57.552266 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xw8s\" (UniqueName: \"kubernetes.io/projected/0bb8f0a5-a41a-44f7-b7a4-d6a45a9f7b2f-kube-api-access-6xw8s\") pod \"keystone-db-create-6htqp\" (UID: \"0bb8f0a5-a41a-44f7-b7a4-d6a45a9f7b2f\") " pod="watcher-kuttl-default/keystone-db-create-6htqp" Dec 02 20:30:57 crc kubenswrapper[4796]: I1202 20:30:57.653975 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l5kd\" (UniqueName: \"kubernetes.io/projected/bfee3cc8-9e6e-48fb-a390-2185c467ddf4-kube-api-access-5l5kd\") pod \"keystone-b9ad-account-create-update-b8ltn\" (UID: \"bfee3cc8-9e6e-48fb-a390-2185c467ddf4\") " pod="watcher-kuttl-default/keystone-b9ad-account-create-update-b8ltn" Dec 02 20:30:57 crc kubenswrapper[4796]: I1202 20:30:57.654050 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xw8s\" (UniqueName: \"kubernetes.io/projected/0bb8f0a5-a41a-44f7-b7a4-d6a45a9f7b2f-kube-api-access-6xw8s\") pod \"keystone-db-create-6htqp\" (UID: \"0bb8f0a5-a41a-44f7-b7a4-d6a45a9f7b2f\") " pod="watcher-kuttl-default/keystone-db-create-6htqp" Dec 02 20:30:57 crc kubenswrapper[4796]: I1202 20:30:57.654243 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfee3cc8-9e6e-48fb-a390-2185c467ddf4-operator-scripts\") pod \"keystone-b9ad-account-create-update-b8ltn\" (UID: \"bfee3cc8-9e6e-48fb-a390-2185c467ddf4\") " pod="watcher-kuttl-default/keystone-b9ad-account-create-update-b8ltn" Dec 02 20:30:57 crc kubenswrapper[4796]: I1202 20:30:57.654402 
4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bb8f0a5-a41a-44f7-b7a4-d6a45a9f7b2f-operator-scripts\") pod \"keystone-db-create-6htqp\" (UID: \"0bb8f0a5-a41a-44f7-b7a4-d6a45a9f7b2f\") " pod="watcher-kuttl-default/keystone-db-create-6htqp" Dec 02 20:30:57 crc kubenswrapper[4796]: I1202 20:30:57.655370 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfee3cc8-9e6e-48fb-a390-2185c467ddf4-operator-scripts\") pod \"keystone-b9ad-account-create-update-b8ltn\" (UID: \"bfee3cc8-9e6e-48fb-a390-2185c467ddf4\") " pod="watcher-kuttl-default/keystone-b9ad-account-create-update-b8ltn" Dec 02 20:30:57 crc kubenswrapper[4796]: I1202 20:30:57.655689 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bb8f0a5-a41a-44f7-b7a4-d6a45a9f7b2f-operator-scripts\") pod \"keystone-db-create-6htqp\" (UID: \"0bb8f0a5-a41a-44f7-b7a4-d6a45a9f7b2f\") " pod="watcher-kuttl-default/keystone-db-create-6htqp" Dec 02 20:30:57 crc kubenswrapper[4796]: I1202 20:30:57.674781 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xw8s\" (UniqueName: \"kubernetes.io/projected/0bb8f0a5-a41a-44f7-b7a4-d6a45a9f7b2f-kube-api-access-6xw8s\") pod \"keystone-db-create-6htqp\" (UID: \"0bb8f0a5-a41a-44f7-b7a4-d6a45a9f7b2f\") " pod="watcher-kuttl-default/keystone-db-create-6htqp" Dec 02 20:30:57 crc kubenswrapper[4796]: I1202 20:30:57.676489 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l5kd\" (UniqueName: \"kubernetes.io/projected/bfee3cc8-9e6e-48fb-a390-2185c467ddf4-kube-api-access-5l5kd\") pod \"keystone-b9ad-account-create-update-b8ltn\" (UID: \"bfee3cc8-9e6e-48fb-a390-2185c467ddf4\") " pod="watcher-kuttl-default/keystone-b9ad-account-create-update-b8ltn" Dec 02 20:30:57 crc kubenswrapper[4796]: I1202 20:30:57.747293 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-create-6htqp" Dec 02 20:30:57 crc kubenswrapper[4796]: I1202 20:30:57.806082 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-b9ad-account-create-update-b8ltn" Dec 02 20:30:58 crc kubenswrapper[4796]: I1202 20:30:58.239516 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-db-create-6htqp"] Dec 02 20:30:58 crc kubenswrapper[4796]: I1202 20:30:58.346844 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-b9ad-account-create-update-b8ltn"] Dec 02 20:30:58 crc kubenswrapper[4796]: W1202 20:30:58.354191 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfee3cc8_9e6e_48fb_a390_2185c467ddf4.slice/crio-b60d2ef775437225f7453e98b751857d589ae45d3a6b6905a3a96c02ce588fe7 WatchSource:0}: Error finding container b60d2ef775437225f7453e98b751857d589ae45d3a6b6905a3a96c02ce588fe7: Status 404 returned error can't find the container with id b60d2ef775437225f7453e98b751857d589ae45d3a6b6905a3a96c02ce588fe7 Dec 02 20:30:58 crc kubenswrapper[4796]: I1202 20:30:58.456631 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-b9ad-account-create-update-b8ltn" event={"ID":"bfee3cc8-9e6e-48fb-a390-2185c467ddf4","Type":"ContainerStarted","Data":"b60d2ef775437225f7453e98b751857d589ae45d3a6b6905a3a96c02ce588fe7"} Dec 02 20:30:58 crc kubenswrapper[4796]: I1202 20:30:58.460033 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-create-6htqp" event={"ID":"0bb8f0a5-a41a-44f7-b7a4-d6a45a9f7b2f","Type":"ContainerStarted","Data":"3e60d35110288db9fd407ced3fa4828de0b32d65473028634fcc65d3d7331e7b"} Dec 02 20:30:59 crc kubenswrapper[4796]: I1202 20:30:59.470413 4796 generic.go:334] "Generic (PLEG): container finished" podID="0bb8f0a5-a41a-44f7-b7a4-d6a45a9f7b2f" containerID="9734d9b2e03e215354b7d81cb7bc500e14e05ef81fad81d3e5c6148055876e2b" exitCode=0 Dec 02 20:30:59 crc kubenswrapper[4796]: I1202 20:30:59.470470 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-create-6htqp" event={"ID":"0bb8f0a5-a41a-44f7-b7a4-d6a45a9f7b2f","Type":"ContainerDied","Data":"9734d9b2e03e215354b7d81cb7bc500e14e05ef81fad81d3e5c6148055876e2b"} Dec 02 20:30:59 crc kubenswrapper[4796]: I1202 20:30:59.472310 4796 generic.go:334] "Generic (PLEG): container finished" podID="bfee3cc8-9e6e-48fb-a390-2185c467ddf4" containerID="4c50c366c50376ce0af10dbf1d28f82a1ce485ff4bd6f11ab275566227cc3db9" exitCode=0 Dec 02 20:30:59 crc kubenswrapper[4796]: I1202 20:30:59.472347 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-b9ad-account-create-update-b8ltn" event={"ID":"bfee3cc8-9e6e-48fb-a390-2185c467ddf4","Type":"ContainerDied","Data":"4c50c366c50376ce0af10dbf1d28f82a1ce485ff4bd6f11ab275566227cc3db9"} Dec 02 20:31:00 crc kubenswrapper[4796]: I1202 20:31:00.488056 4796 generic.go:334] "Generic (PLEG): container finished" podID="17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2" containerID="4a2d5fc8422b53091db0e40142071aed035039d662e01fd69453572d3b77ca68" exitCode=0 Dec 02 20:31:00 crc kubenswrapper[4796]: I1202 20:31:00.488144 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-server-0" event={"ID":"17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2","Type":"ContainerDied","Data":"4a2d5fc8422b53091db0e40142071aed035039d662e01fd69453572d3b77ca68"} Dec 02 20:31:00 crc kubenswrapper[4796]: I1202 20:31:00.942499 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-db-create-6htqp" Dec 02 20:31:00 crc kubenswrapper[4796]: I1202 20:31:00.960095 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-b9ad-account-create-update-b8ltn" Dec 02 20:31:01 crc kubenswrapper[4796]: I1202 20:31:01.055773 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xw8s\" (UniqueName: \"kubernetes.io/projected/0bb8f0a5-a41a-44f7-b7a4-d6a45a9f7b2f-kube-api-access-6xw8s\") pod \"0bb8f0a5-a41a-44f7-b7a4-d6a45a9f7b2f\" (UID: \"0bb8f0a5-a41a-44f7-b7a4-d6a45a9f7b2f\") " Dec 02 20:31:01 crc kubenswrapper[4796]: I1202 20:31:01.056162 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l5kd\" (UniqueName: \"kubernetes.io/projected/bfee3cc8-9e6e-48fb-a390-2185c467ddf4-kube-api-access-5l5kd\") pod \"bfee3cc8-9e6e-48fb-a390-2185c467ddf4\" (UID: \"bfee3cc8-9e6e-48fb-a390-2185c467ddf4\") " Dec 02 20:31:01 crc kubenswrapper[4796]: I1202 20:31:01.056304 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bb8f0a5-a41a-44f7-b7a4-d6a45a9f7b2f-operator-scripts\") pod \"0bb8f0a5-a41a-44f7-b7a4-d6a45a9f7b2f\" (UID: \"0bb8f0a5-a41a-44f7-b7a4-d6a45a9f7b2f\") " Dec 02 20:31:01 crc kubenswrapper[4796]: I1202 20:31:01.056489 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfee3cc8-9e6e-48fb-a390-2185c467ddf4-operator-scripts\") pod \"bfee3cc8-9e6e-48fb-a390-2185c467ddf4\" (UID: \"bfee3cc8-9e6e-48fb-a390-2185c467ddf4\") " Dec 02 20:31:01 crc kubenswrapper[4796]: I1202 20:31:01.057156 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfee3cc8-9e6e-48fb-a390-2185c467ddf4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bfee3cc8-9e6e-48fb-a390-2185c467ddf4" (UID: "bfee3cc8-9e6e-48fb-a390-2185c467ddf4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:31:01 crc kubenswrapper[4796]: I1202 20:31:01.057650 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bb8f0a5-a41a-44f7-b7a4-d6a45a9f7b2f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0bb8f0a5-a41a-44f7-b7a4-d6a45a9f7b2f" (UID: "0bb8f0a5-a41a-44f7-b7a4-d6a45a9f7b2f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:31:01 crc kubenswrapper[4796]: I1202 20:31:01.059484 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfee3cc8-9e6e-48fb-a390-2185c467ddf4-kube-api-access-5l5kd" (OuterVolumeSpecName: "kube-api-access-5l5kd") pod "bfee3cc8-9e6e-48fb-a390-2185c467ddf4" (UID: "bfee3cc8-9e6e-48fb-a390-2185c467ddf4"). InnerVolumeSpecName "kube-api-access-5l5kd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:31:01 crc kubenswrapper[4796]: I1202 20:31:01.059719 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bb8f0a5-a41a-44f7-b7a4-d6a45a9f7b2f-kube-api-access-6xw8s" (OuterVolumeSpecName: "kube-api-access-6xw8s") pod "0bb8f0a5-a41a-44f7-b7a4-d6a45a9f7b2f" (UID: "0bb8f0a5-a41a-44f7-b7a4-d6a45a9f7b2f"). InnerVolumeSpecName "kube-api-access-6xw8s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:31:01 crc kubenswrapper[4796]: I1202 20:31:01.158827 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfee3cc8-9e6e-48fb-a390-2185c467ddf4-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:31:01 crc kubenswrapper[4796]: I1202 20:31:01.158874 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xw8s\" (UniqueName: \"kubernetes.io/projected/0bb8f0a5-a41a-44f7-b7a4-d6a45a9f7b2f-kube-api-access-6xw8s\") on node \"crc\" DevicePath \"\"" Dec 02 20:31:01 crc kubenswrapper[4796]: I1202 20:31:01.158893 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5l5kd\" (UniqueName: \"kubernetes.io/projected/bfee3cc8-9e6e-48fb-a390-2185c467ddf4-kube-api-access-5l5kd\") on node \"crc\" DevicePath \"\"" Dec 02 20:31:01 crc kubenswrapper[4796]: I1202 20:31:01.158905 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bb8f0a5-a41a-44f7-b7a4-d6a45a9f7b2f-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:31:01 crc kubenswrapper[4796]: I1202 20:31:01.529620 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-server-0" event={"ID":"17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2","Type":"ContainerStarted","Data":"fffa67797f844c0ab87c0c05270dc610421aefc468b9ba347b9ae2e1466847ee"} Dec 02 20:31:01 crc kubenswrapper[4796]: I1202 20:31:01.531519 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 20:31:01 crc kubenswrapper[4796]: I1202 20:31:01.532392 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-create-6htqp" Dec 02 20:31:01 crc kubenswrapper[4796]: I1202 20:31:01.532385 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-create-6htqp" event={"ID":"0bb8f0a5-a41a-44f7-b7a4-d6a45a9f7b2f","Type":"ContainerDied","Data":"3e60d35110288db9fd407ced3fa4828de0b32d65473028634fcc65d3d7331e7b"} Dec 02 20:31:01 crc kubenswrapper[4796]: I1202 20:31:01.532486 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e60d35110288db9fd407ced3fa4828de0b32d65473028634fcc65d3d7331e7b" Dec 02 20:31:01 crc kubenswrapper[4796]: I1202 20:31:01.534372 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-b9ad-account-create-update-b8ltn" event={"ID":"bfee3cc8-9e6e-48fb-a390-2185c467ddf4","Type":"ContainerDied","Data":"b60d2ef775437225f7453e98b751857d589ae45d3a6b6905a3a96c02ce588fe7"} Dec 02 20:31:01 crc kubenswrapper[4796]: I1202 20:31:01.534402 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-b9ad-account-create-update-b8ltn" Dec 02 20:31:01 crc kubenswrapper[4796]: I1202 20:31:01.534404 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b60d2ef775437225f7453e98b751857d589ae45d3a6b6905a3a96c02ce588fe7" Dec 02 20:31:01 crc kubenswrapper[4796]: I1202 20:31:01.557126 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/rabbitmq-server-0" podStartSLOduration=37.370326998 podStartE2EDuration="1m7.557106482s" podCreationTimestamp="2025-12-02 20:29:54 +0000 UTC" firstStartedPulling="2025-12-02 20:29:56.518270949 +0000 UTC m=+1079.521646483" lastFinishedPulling="2025-12-02 20:30:26.705050413 +0000 UTC m=+1109.708425967" observedRunningTime="2025-12-02 20:31:01.556722903 +0000 UTC m=+1144.560098457" watchObservedRunningTime="2025-12-02 20:31:01.557106482 +0000 UTC m=+1144.560482016" Dec 02 20:31:03 crc kubenswrapper[4796]: I1202 20:31:03.552400 4796 generic.go:334] "Generic (PLEG): container finished" podID="0add539b-c51e-4616-9235-12465a2e5ecb" containerID="05c9d91658391647a4c84ecf0db296c8f5df38fd027cdcf46ed199babba8233f" exitCode=0 Dec 02 20:31:03 crc kubenswrapper[4796]: I1202 20:31:03.552673 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"0add539b-c51e-4616-9235-12465a2e5ecb","Type":"ContainerDied","Data":"05c9d91658391647a4c84ecf0db296c8f5df38fd027cdcf46ed199babba8233f"} Dec 02 20:31:04 crc kubenswrapper[4796]: I1202 20:31:04.565920 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"0add539b-c51e-4616-9235-12465a2e5ecb","Type":"ContainerStarted","Data":"7ce9e82cf0339366ec3d23bfa2d000dea85a7045920fdd2188cfc97e03303682"} Dec 02 20:31:04 crc kubenswrapper[4796]: I1202 20:31:04.566850 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 20:31:04 crc kubenswrapper[4796]: I1202 20:31:04.598357 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" podStartSLOduration=-9223371966.256449 podStartE2EDuration="1m10.598326346s" podCreationTimestamp="2025-12-02 20:29:54 +0000 UTC" firstStartedPulling="2025-12-02 20:29:56.71085665 +0000 UTC m=+1079.714232204" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:31:04.591697225 +0000 UTC m=+1147.595072799" watchObservedRunningTime="2025-12-02 20:31:04.598326346 +0000 UTC m=+1147.601701930" Dec 02 20:31:08 crc kubenswrapper[4796]: I1202 20:31:08.606306 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1","Type":"ContainerStarted","Data":"703221751a176783fd13fd0e9d1beaaca81acedaca751b6309a0c67e5efc0a24"} Dec 02 20:31:08 crc kubenswrapper[4796]: I1202 20:31:08.642335 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/prometheus-metric-storage-0" podStartSLOduration=4.530936168 podStartE2EDuration="1m10.642312529s" podCreationTimestamp="2025-12-02 20:29:58 +0000 UTC" firstStartedPulling="2025-12-02 20:30:01.901626253 +0000 UTC m=+1084.905001787" lastFinishedPulling="2025-12-02 20:31:08.013002614 +0000 UTC m=+1151.016378148" observedRunningTime="2025-12-02 20:31:08.638166318 +0000 UTC m=+1151.641541902" 
watchObservedRunningTime="2025-12-02 20:31:08.642312529 +0000 UTC m=+1151.645688073" Dec 02 20:31:09 crc kubenswrapper[4796]: I1202 20:31:09.972583 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:14 crc kubenswrapper[4796]: I1202 20:31:14.972960 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:14 crc kubenswrapper[4796]: I1202 20:31:14.977163 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:15 crc kubenswrapper[4796]: I1202 20:31:15.680788 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:16 crc kubenswrapper[4796]: I1202 20:31:16.019535 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/rabbitmq-server-0" Dec 02 20:31:16 crc kubenswrapper[4796]: I1202 20:31:16.277480 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Dec 02 20:31:16 crc kubenswrapper[4796]: I1202 20:31:16.739703 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-db-sync-gldrd"] Dec 02 20:31:16 crc kubenswrapper[4796]: E1202 20:31:16.740056 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bb8f0a5-a41a-44f7-b7a4-d6a45a9f7b2f" containerName="mariadb-database-create" Dec 02 20:31:16 crc kubenswrapper[4796]: I1202 20:31:16.740073 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bb8f0a5-a41a-44f7-b7a4-d6a45a9f7b2f" containerName="mariadb-database-create" Dec 02 20:31:16 crc kubenswrapper[4796]: E1202 20:31:16.740081 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfee3cc8-9e6e-48fb-a390-2185c467ddf4" containerName="mariadb-account-create-update" Dec 02 20:31:16 crc kubenswrapper[4796]: I1202 20:31:16.740088 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfee3cc8-9e6e-48fb-a390-2185c467ddf4" containerName="mariadb-account-create-update" Dec 02 20:31:16 crc kubenswrapper[4796]: I1202 20:31:16.740237 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfee3cc8-9e6e-48fb-a390-2185c467ddf4" containerName="mariadb-account-create-update" Dec 02 20:31:16 crc kubenswrapper[4796]: I1202 20:31:16.740272 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bb8f0a5-a41a-44f7-b7a4-d6a45a9f7b2f" containerName="mariadb-database-create" Dec 02 20:31:16 crc kubenswrapper[4796]: I1202 20:31:16.740820 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-db-sync-gldrd" Dec 02 20:31:16 crc kubenswrapper[4796]: I1202 20:31:16.743066 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-scripts" Dec 02 20:31:16 crc kubenswrapper[4796]: I1202 20:31:16.743303 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-config-data" Dec 02 20:31:16 crc kubenswrapper[4796]: I1202 20:31:16.743388 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-keystone-dockercfg-fj42t" Dec 02 20:31:16 crc kubenswrapper[4796]: I1202 20:31:16.744627 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone" Dec 02 20:31:16 crc kubenswrapper[4796]: I1202 20:31:16.760647 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-db-sync-gldrd"] Dec 02 20:31:16 crc kubenswrapper[4796]: I1202 20:31:16.828670 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fca6606-3b10-4323-99f7-1baae97f5477-config-data\") pod \"keystone-db-sync-gldrd\" (UID: \"3fca6606-3b10-4323-99f7-1baae97f5477\") " pod="watcher-kuttl-default/keystone-db-sync-gldrd" Dec 02 20:31:16 crc kubenswrapper[4796]: I1202 20:31:16.828775 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fca6606-3b10-4323-99f7-1baae97f5477-combined-ca-bundle\") pod \"keystone-db-sync-gldrd\" (UID: \"3fca6606-3b10-4323-99f7-1baae97f5477\") " pod="watcher-kuttl-default/keystone-db-sync-gldrd" Dec 02 20:31:16 crc kubenswrapper[4796]: I1202 20:31:16.828831 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9f2p\" (UniqueName: \"kubernetes.io/projected/3fca6606-3b10-4323-99f7-1baae97f5477-kube-api-access-q9f2p\") pod \"keystone-db-sync-gldrd\" (UID: \"3fca6606-3b10-4323-99f7-1baae97f5477\") " pod="watcher-kuttl-default/keystone-db-sync-gldrd" Dec 02 20:31:16 crc kubenswrapper[4796]: I1202 20:31:16.930045 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9f2p\" (UniqueName: \"kubernetes.io/projected/3fca6606-3b10-4323-99f7-1baae97f5477-kube-api-access-q9f2p\") pod \"keystone-db-sync-gldrd\" (UID: \"3fca6606-3b10-4323-99f7-1baae97f5477\") " pod="watcher-kuttl-default/keystone-db-sync-gldrd" Dec 02 20:31:16 crc kubenswrapper[4796]: I1202 20:31:16.930136 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fca6606-3b10-4323-99f7-1baae97f5477-config-data\") pod \"keystone-db-sync-gldrd\" (UID: \"3fca6606-3b10-4323-99f7-1baae97f5477\") " pod="watcher-kuttl-default/keystone-db-sync-gldrd" Dec 02 20:31:16 crc kubenswrapper[4796]: I1202 20:31:16.930198 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fca6606-3b10-4323-99f7-1baae97f5477-combined-ca-bundle\") pod \"keystone-db-sync-gldrd\" (UID: \"3fca6606-3b10-4323-99f7-1baae97f5477\") " pod="watcher-kuttl-default/keystone-db-sync-gldrd" Dec 02 20:31:16 crc kubenswrapper[4796]: I1202 20:31:16.936068 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3fca6606-3b10-4323-99f7-1baae97f5477-config-data\") pod \"keystone-db-sync-gldrd\" (UID: \"3fca6606-3b10-4323-99f7-1baae97f5477\") " pod="watcher-kuttl-default/keystone-db-sync-gldrd" Dec 02 20:31:16 crc kubenswrapper[4796]: I1202 20:31:16.937032 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fca6606-3b10-4323-99f7-1baae97f5477-combined-ca-bundle\") pod \"keystone-db-sync-gldrd\" (UID: \"3fca6606-3b10-4323-99f7-1baae97f5477\") " pod="watcher-kuttl-default/keystone-db-sync-gldrd" Dec 02 20:31:16 crc kubenswrapper[4796]: I1202 20:31:16.952522 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9f2p\" (UniqueName: \"kubernetes.io/projected/3fca6606-3b10-4323-99f7-1baae97f5477-kube-api-access-q9f2p\") pod \"keystone-db-sync-gldrd\" (UID: \"3fca6606-3b10-4323-99f7-1baae97f5477\") " pod="watcher-kuttl-default/keystone-db-sync-gldrd" Dec 02 20:31:17 crc kubenswrapper[4796]: I1202 20:31:17.059119 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-sync-gldrd" Dec 02 20:31:17 crc kubenswrapper[4796]: I1202 20:31:17.622435 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-db-sync-gldrd"] Dec 02 20:31:17 crc kubenswrapper[4796]: I1202 20:31:17.691937 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-sync-gldrd" event={"ID":"3fca6606-3b10-4323-99f7-1baae97f5477","Type":"ContainerStarted","Data":"1183fb7e7743bc4399739304e8b6c352a87063d6c5214ae4c5374a2acfca8342"} Dec 02 20:31:18 crc kubenswrapper[4796]: I1202 20:31:18.884725 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Dec 02 20:31:18 crc kubenswrapper[4796]: I1202 20:31:18.886305 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/prometheus-metric-storage-0" podUID="adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1" containerName="config-reloader" containerID="cri-o://1a76e7004cdaec1766fbe0db00333c7ac280d9858aee130792765181996528d9" gracePeriod=600 Dec 02 20:31:18 crc kubenswrapper[4796]: I1202 20:31:18.886314 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/prometheus-metric-storage-0" podUID="adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1" containerName="prometheus" containerID="cri-o://703221751a176783fd13fd0e9d1beaaca81acedaca751b6309a0c67e5efc0a24" gracePeriod=600 Dec 02 20:31:18 crc kubenswrapper[4796]: I1202 20:31:18.886424 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/prometheus-metric-storage-0" podUID="adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1" containerName="thanos-sidecar" containerID="cri-o://7392c85a53b14151a778e3745e06c54f5d8dc44694c60d5a00eb7a1310fb4b95" gracePeriod=600 Dec 02 20:31:19 crc kubenswrapper[4796]: I1202 20:31:19.711991 4796 generic.go:334] "Generic (PLEG): container finished" podID="adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1" containerID="703221751a176783fd13fd0e9d1beaaca81acedaca751b6309a0c67e5efc0a24" exitCode=0 Dec 02 20:31:19 crc kubenswrapper[4796]: I1202 20:31:19.712022 4796 generic.go:334] "Generic (PLEG): container finished" podID="adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1" containerID="7392c85a53b14151a778e3745e06c54f5d8dc44694c60d5a00eb7a1310fb4b95" exitCode=0 Dec 02 20:31:19 crc kubenswrapper[4796]: I1202 20:31:19.712031 4796 
generic.go:334] "Generic (PLEG): container finished" podID="adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1" containerID="1a76e7004cdaec1766fbe0db00333c7ac280d9858aee130792765181996528d9" exitCode=0 Dec 02 20:31:19 crc kubenswrapper[4796]: I1202 20:31:19.712053 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1","Type":"ContainerDied","Data":"703221751a176783fd13fd0e9d1beaaca81acedaca751b6309a0c67e5efc0a24"} Dec 02 20:31:19 crc kubenswrapper[4796]: I1202 20:31:19.712079 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1","Type":"ContainerDied","Data":"7392c85a53b14151a778e3745e06c54f5d8dc44694c60d5a00eb7a1310fb4b95"} Dec 02 20:31:19 crc kubenswrapper[4796]: I1202 20:31:19.712090 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1","Type":"ContainerDied","Data":"1a76e7004cdaec1766fbe0db00333c7ac280d9858aee130792765181996528d9"} Dec 02 20:31:19 crc kubenswrapper[4796]: I1202 20:31:19.712104 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1","Type":"ContainerDied","Data":"695cc5bda7f7467f8c82d3f79770417db65af62118732e2628db29d740b6d39a"} Dec 02 20:31:19 crc kubenswrapper[4796]: I1202 20:31:19.712114 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="695cc5bda7f7467f8c82d3f79770417db65af62118732e2628db29d740b6d39a" Dec 02 20:31:19 crc kubenswrapper[4796]: I1202 20:31:19.766463 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:19 crc kubenswrapper[4796]: I1202 20:31:19.874131 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1-config\") pod \"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1\" (UID: \"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1\") " Dec 02 20:31:19 crc kubenswrapper[4796]: I1202 20:31:19.874178 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1-prometheus-metric-storage-rulefiles-0\") pod \"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1\" (UID: \"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1\") " Dec 02 20:31:19 crc kubenswrapper[4796]: I1202 20:31:19.874336 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ac0fc788-d8ad-4ecd-a0f3-abf59e50af78\") pod \"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1\" (UID: \"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1\") " Dec 02 20:31:19 crc kubenswrapper[4796]: I1202 20:31:19.874387 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1-thanos-prometheus-http-client-file\") pod \"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1\" (UID: \"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1\") " Dec 02 20:31:19 crc kubenswrapper[4796]: I1202 20:31:19.874413 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1-config-out\") pod \"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1\" (UID: \"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1\") " Dec 02 20:31:19 crc kubenswrapper[4796]: I1202 20:31:19.874483 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1-tls-assets\") pod \"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1\" (UID: \"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1\") " Dec 02 20:31:19 crc kubenswrapper[4796]: I1202 20:31:19.874507 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2hhw\" (UniqueName: \"kubernetes.io/projected/adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1-kube-api-access-v2hhw\") pod \"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1\" (UID: \"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1\") " Dec 02 20:31:19 crc kubenswrapper[4796]: I1202 20:31:19.874526 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1-web-config\") pod \"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1\" (UID: \"adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1\") " Dec 02 20:31:19 crc kubenswrapper[4796]: I1202 20:31:19.875218 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1" (UID: "adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:31:19 crc kubenswrapper[4796]: I1202 20:31:19.880453 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1-config" (OuterVolumeSpecName: "config") pod "adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1" (UID: "adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:31:19 crc kubenswrapper[4796]: I1202 20:31:19.881830 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1-config-out" (OuterVolumeSpecName: "config-out") pod "adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1" (UID: "adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:31:19 crc kubenswrapper[4796]: I1202 20:31:19.882603 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1" (UID: "adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:31:19 crc kubenswrapper[4796]: I1202 20:31:19.883272 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1-kube-api-access-v2hhw" (OuterVolumeSpecName: "kube-api-access-v2hhw") pod "adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1" (UID: "adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1"). InnerVolumeSpecName "kube-api-access-v2hhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:31:19 crc kubenswrapper[4796]: I1202 20:31:19.886298 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1" (UID: "adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:31:19 crc kubenswrapper[4796]: I1202 20:31:19.904313 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1-web-config" (OuterVolumeSpecName: "web-config") pod "adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1" (UID: "adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:31:19 crc kubenswrapper[4796]: I1202 20:31:19.907594 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ac0fc788-d8ad-4ecd-a0f3-abf59e50af78" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1" (UID: "adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1"). InnerVolumeSpecName "pvc-ac0fc788-d8ad-4ecd-a0f3-abf59e50af78". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 02 20:31:19 crc kubenswrapper[4796]: I1202 20:31:19.976808 4796 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1-tls-assets\") on node \"crc\" DevicePath \"\"" Dec 02 20:31:19 crc kubenswrapper[4796]: I1202 20:31:19.977179 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2hhw\" (UniqueName: \"kubernetes.io/projected/adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1-kube-api-access-v2hhw\") on node \"crc\" DevicePath \"\"" Dec 02 20:31:19 crc kubenswrapper[4796]: I1202 20:31:19.977193 4796 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1-web-config\") on node \"crc\" DevicePath \"\"" Dec 02 20:31:19 crc kubenswrapper[4796]: I1202 20:31:19.977205 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1-config\") on node \"crc\" DevicePath \"\"" Dec 02 20:31:19 crc kubenswrapper[4796]: I1202 20:31:19.977214 4796 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Dec 02 20:31:19 crc kubenswrapper[4796]: I1202 20:31:19.977273 4796 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-ac0fc788-d8ad-4ecd-a0f3-abf59e50af78\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ac0fc788-d8ad-4ecd-a0f3-abf59e50af78\") on node \"crc\" " Dec 02 20:31:19 crc kubenswrapper[4796]: I1202 20:31:19.977289 4796 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Dec 02 20:31:19 crc kubenswrapper[4796]: I1202 20:31:19.977298 4796 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1-config-out\") on node \"crc\" DevicePath \"\"" Dec 02 20:31:20 crc kubenswrapper[4796]: I1202 20:31:20.013102 4796 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Dec 02 20:31:20 crc kubenswrapper[4796]: I1202 20:31:20.013361 4796 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-ac0fc788-d8ad-4ecd-a0f3-abf59e50af78" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ac0fc788-d8ad-4ecd-a0f3-abf59e50af78") on node "crc" Dec 02 20:31:20 crc kubenswrapper[4796]: I1202 20:31:20.079348 4796 reconciler_common.go:293] "Volume detached for volume \"pvc-ac0fc788-d8ad-4ecd-a0f3-abf59e50af78\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ac0fc788-d8ad-4ecd-a0f3-abf59e50af78\") on node \"crc\" DevicePath \"\"" Dec 02 20:31:20 crc kubenswrapper[4796]: I1202 20:31:20.723771 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:20 crc kubenswrapper[4796]: I1202 20:31:20.801811 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Dec 02 20:31:20 crc kubenswrapper[4796]: I1202 20:31:20.827339 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Dec 02 20:31:20 crc kubenswrapper[4796]: I1202 20:31:20.836433 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Dec 02 20:31:20 crc kubenswrapper[4796]: E1202 20:31:20.837013 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1" containerName="thanos-sidecar" Dec 02 20:31:20 crc kubenswrapper[4796]: I1202 20:31:20.837043 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1" containerName="thanos-sidecar" Dec 02 20:31:20 crc kubenswrapper[4796]: E1202 20:31:20.837065 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1" containerName="config-reloader" Dec 02 20:31:20 crc kubenswrapper[4796]: I1202 20:31:20.837079 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1" containerName="config-reloader" Dec 02 20:31:20 crc kubenswrapper[4796]: E1202 20:31:20.837119 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1" containerName="prometheus" Dec 02 20:31:20 crc kubenswrapper[4796]: I1202 20:31:20.837135 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1" containerName="prometheus" Dec 02 20:31:20 crc kubenswrapper[4796]: E1202 20:31:20.837153 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1" containerName="init-config-reloader" Dec 02 20:31:20 crc kubenswrapper[4796]: I1202 20:31:20.837166 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1" containerName="init-config-reloader" Dec 02 20:31:20 crc kubenswrapper[4796]: I1202 20:31:20.837562 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1" containerName="prometheus" Dec 02 20:31:20 crc kubenswrapper[4796]: I1202 20:31:20.837603 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1" containerName="thanos-sidecar" Dec 02 20:31:20 crc kubenswrapper[4796]: I1202 20:31:20.837630 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1" containerName="config-reloader" Dec 02 20:31:20 crc kubenswrapper[4796]: I1202 20:31:20.840215 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:20 crc kubenswrapper[4796]: I1202 20:31:20.846156 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-metric-storage-prometheus-svc" Dec 02 20:31:20 crc kubenswrapper[4796]: I1202 20:31:20.853137 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-web-config" Dec 02 20:31:20 crc kubenswrapper[4796]: I1202 20:31:20.853412 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage" Dec 02 20:31:20 crc kubenswrapper[4796]: I1202 20:31:20.854292 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Dec 02 20:31:20 crc kubenswrapper[4796]: I1202 20:31:20.855526 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"metric-storage-prometheus-dockercfg-sknx5" Dec 02 20:31:20 crc kubenswrapper[4796]: I1202 20:31:20.856126 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-0" Dec 02 20:31:20 crc kubenswrapper[4796]: I1202 20:31:20.857179 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-tls-assets-0" Dec 02 20:31:20 crc kubenswrapper[4796]: I1202 20:31:20.866577 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Dec 02 20:31:20 crc kubenswrapper[4796]: I1202 20:31:20.997334 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/94835415-6d5d-492c-9a10-b023803a2978-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"94835415-6d5d-492c-9a10-b023803a2978\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:20 crc kubenswrapper[4796]: I1202 20:31:20.997418 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/94835415-6d5d-492c-9a10-b023803a2978-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"94835415-6d5d-492c-9a10-b023803a2978\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:20 crc kubenswrapper[4796]: I1202 20:31:20.997509 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94835415-6d5d-492c-9a10-b023803a2978-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"94835415-6d5d-492c-9a10-b023803a2978\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:20 crc kubenswrapper[4796]: I1202 20:31:20.997542 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ac0fc788-d8ad-4ecd-a0f3-abf59e50af78\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ac0fc788-d8ad-4ecd-a0f3-abf59e50af78\") pod \"prometheus-metric-storage-0\" (UID: \"94835415-6d5d-492c-9a10-b023803a2978\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:20 crc kubenswrapper[4796]: I1202 
20:31:20.997587 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/94835415-6d5d-492c-9a10-b023803a2978-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"94835415-6d5d-492c-9a10-b023803a2978\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:20 crc kubenswrapper[4796]: I1202 20:31:20.997632 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/94835415-6d5d-492c-9a10-b023803a2978-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"94835415-6d5d-492c-9a10-b023803a2978\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:20 crc kubenswrapper[4796]: I1202 20:31:20.997681 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/94835415-6d5d-492c-9a10-b023803a2978-config\") pod \"prometheus-metric-storage-0\" (UID: \"94835415-6d5d-492c-9a10-b023803a2978\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:20 crc kubenswrapper[4796]: I1202 20:31:20.997720 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/94835415-6d5d-492c-9a10-b023803a2978-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"94835415-6d5d-492c-9a10-b023803a2978\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:20 crc kubenswrapper[4796]: I1202 20:31:20.997768 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/94835415-6d5d-492c-9a10-b023803a2978-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"94835415-6d5d-492c-9a10-b023803a2978\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:20 crc kubenswrapper[4796]: I1202 20:31:20.997852 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-575hc\" (UniqueName: \"kubernetes.io/projected/94835415-6d5d-492c-9a10-b023803a2978-kube-api-access-575hc\") pod \"prometheus-metric-storage-0\" (UID: \"94835415-6d5d-492c-9a10-b023803a2978\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:20 crc kubenswrapper[4796]: I1202 20:31:20.997969 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/94835415-6d5d-492c-9a10-b023803a2978-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"94835415-6d5d-492c-9a10-b023803a2978\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:21 crc kubenswrapper[4796]: I1202 20:31:21.099018 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/94835415-6d5d-492c-9a10-b023803a2978-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"94835415-6d5d-492c-9a10-b023803a2978\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:21 crc kubenswrapper[4796]: I1202 20:31:21.099072 4796 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/94835415-6d5d-492c-9a10-b023803a2978-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"94835415-6d5d-492c-9a10-b023803a2978\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:21 crc kubenswrapper[4796]: I1202 20:31:21.099109 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94835415-6d5d-492c-9a10-b023803a2978-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"94835415-6d5d-492c-9a10-b023803a2978\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:21 crc kubenswrapper[4796]: I1202 20:31:21.099131 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ac0fc788-d8ad-4ecd-a0f3-abf59e50af78\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ac0fc788-d8ad-4ecd-a0f3-abf59e50af78\") pod \"prometheus-metric-storage-0\" (UID: \"94835415-6d5d-492c-9a10-b023803a2978\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:21 crc kubenswrapper[4796]: I1202 20:31:21.099159 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/94835415-6d5d-492c-9a10-b023803a2978-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"94835415-6d5d-492c-9a10-b023803a2978\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:21 crc kubenswrapper[4796]: I1202 20:31:21.099184 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/94835415-6d5d-492c-9a10-b023803a2978-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"94835415-6d5d-492c-9a10-b023803a2978\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:21 crc kubenswrapper[4796]: I1202 20:31:21.099219 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/94835415-6d5d-492c-9a10-b023803a2978-config\") pod \"prometheus-metric-storage-0\" (UID: \"94835415-6d5d-492c-9a10-b023803a2978\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:21 crc kubenswrapper[4796]: I1202 20:31:21.099241 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/94835415-6d5d-492c-9a10-b023803a2978-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"94835415-6d5d-492c-9a10-b023803a2978\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:21 crc kubenswrapper[4796]: I1202 20:31:21.099282 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/94835415-6d5d-492c-9a10-b023803a2978-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"94835415-6d5d-492c-9a10-b023803a2978\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:21 crc kubenswrapper[4796]: I1202 20:31:21.099302 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-575hc\" (UniqueName: 
\"kubernetes.io/projected/94835415-6d5d-492c-9a10-b023803a2978-kube-api-access-575hc\") pod \"prometheus-metric-storage-0\" (UID: \"94835415-6d5d-492c-9a10-b023803a2978\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:21 crc kubenswrapper[4796]: I1202 20:31:21.099335 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/94835415-6d5d-492c-9a10-b023803a2978-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"94835415-6d5d-492c-9a10-b023803a2978\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:21 crc kubenswrapper[4796]: I1202 20:31:21.100823 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/94835415-6d5d-492c-9a10-b023803a2978-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"94835415-6d5d-492c-9a10-b023803a2978\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:21 crc kubenswrapper[4796]: I1202 20:31:21.104032 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/94835415-6d5d-492c-9a10-b023803a2978-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"94835415-6d5d-492c-9a10-b023803a2978\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:21 crc kubenswrapper[4796]: I1202 20:31:21.104489 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/94835415-6d5d-492c-9a10-b023803a2978-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"94835415-6d5d-492c-9a10-b023803a2978\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:21 crc kubenswrapper[4796]: I1202 20:31:21.110871 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/94835415-6d5d-492c-9a10-b023803a2978-config\") pod \"prometheus-metric-storage-0\" (UID: \"94835415-6d5d-492c-9a10-b023803a2978\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:21 crc kubenswrapper[4796]: I1202 20:31:21.111374 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/94835415-6d5d-492c-9a10-b023803a2978-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"94835415-6d5d-492c-9a10-b023803a2978\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:21 crc kubenswrapper[4796]: I1202 20:31:21.113498 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/94835415-6d5d-492c-9a10-b023803a2978-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"94835415-6d5d-492c-9a10-b023803a2978\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:21 crc kubenswrapper[4796]: I1202 20:31:21.114722 4796 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 02 20:31:21 crc kubenswrapper[4796]: I1202 20:31:21.114742 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ac0fc788-d8ad-4ecd-a0f3-abf59e50af78\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ac0fc788-d8ad-4ecd-a0f3-abf59e50af78\") pod \"prometheus-metric-storage-0\" (UID: \"94835415-6d5d-492c-9a10-b023803a2978\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d6a10f67185551a5fa7e32cf10383bff9b63c518132820d4404eb264d6f6191b/globalmount\"" pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:21 crc kubenswrapper[4796]: I1202 20:31:21.115386 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/94835415-6d5d-492c-9a10-b023803a2978-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"94835415-6d5d-492c-9a10-b023803a2978\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:21 crc kubenswrapper[4796]: I1202 20:31:21.123747 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-575hc\" (UniqueName: \"kubernetes.io/projected/94835415-6d5d-492c-9a10-b023803a2978-kube-api-access-575hc\") pod \"prometheus-metric-storage-0\" (UID: \"94835415-6d5d-492c-9a10-b023803a2978\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:21 crc kubenswrapper[4796]: I1202 20:31:21.123992 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94835415-6d5d-492c-9a10-b023803a2978-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"94835415-6d5d-492c-9a10-b023803a2978\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:21 crc kubenswrapper[4796]: I1202 20:31:21.126508 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/94835415-6d5d-492c-9a10-b023803a2978-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"94835415-6d5d-492c-9a10-b023803a2978\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:21 crc kubenswrapper[4796]: I1202 20:31:21.191695 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ac0fc788-d8ad-4ecd-a0f3-abf59e50af78\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ac0fc788-d8ad-4ecd-a0f3-abf59e50af78\") pod \"prometheus-metric-storage-0\" (UID: \"94835415-6d5d-492c-9a10-b023803a2978\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:21 crc kubenswrapper[4796]: I1202 20:31:21.279605 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1" path="/var/lib/kubelet/pods/adfbe1bd-5da9-4d9c-86d6-7dcfe9a5fec1/volumes" Dec 02 20:31:21 crc kubenswrapper[4796]: I1202 20:31:21.457059 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:25 crc kubenswrapper[4796]: I1202 20:31:25.189077 4796 patch_prober.go:28] interesting pod/machine-config-daemon-wzhpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 20:31:25 crc kubenswrapper[4796]: I1202 20:31:25.189391 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 20:31:25 crc kubenswrapper[4796]: I1202 20:31:25.189429 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" Dec 02 20:31:25 crc kubenswrapper[4796]: I1202 20:31:25.189898 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"81e0968c57ec6d9b11845db69e201783fcaa5e46b23de5768e000d45505c4ab7"} pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 20:31:25 crc kubenswrapper[4796]: I1202 20:31:25.189943 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerName="machine-config-daemon" containerID="cri-o://81e0968c57ec6d9b11845db69e201783fcaa5e46b23de5768e000d45505c4ab7" gracePeriod=600 Dec 02 20:31:25 crc kubenswrapper[4796]: I1202 20:31:25.798242 4796 generic.go:334] "Generic (PLEG): container finished" podID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerID="81e0968c57ec6d9b11845db69e201783fcaa5e46b23de5768e000d45505c4ab7" exitCode=0 Dec 02 20:31:25 crc kubenswrapper[4796]: I1202 20:31:25.798333 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" event={"ID":"5558dc7c-93f9-4212-bf22-fdec743e47ee","Type":"ContainerDied","Data":"81e0968c57ec6d9b11845db69e201783fcaa5e46b23de5768e000d45505c4ab7"} Dec 02 20:31:25 crc kubenswrapper[4796]: I1202 20:31:25.798670 4796 scope.go:117] "RemoveContainer" containerID="8d6f0c135e487e19c4f958756870ce83ded4504e5b54dacbb97a36ee8b0a0032" Dec 02 20:31:27 crc kubenswrapper[4796]: I1202 20:31:27.118511 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Dec 02 20:31:27 crc kubenswrapper[4796]: W1202 20:31:27.121868 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94835415_6d5d_492c_9a10_b023803a2978.slice/crio-c715e4e852b4b97d1e86cfa40cad14b5de8bca2552b2f26aecbd5312901983d5 WatchSource:0}: Error finding container c715e4e852b4b97d1e86cfa40cad14b5de8bca2552b2f26aecbd5312901983d5: Status 404 returned error can't find the container with id c715e4e852b4b97d1e86cfa40cad14b5de8bca2552b2f26aecbd5312901983d5 Dec 02 20:31:27 crc kubenswrapper[4796]: I1202 20:31:27.867894 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" 
event={"ID":"5558dc7c-93f9-4212-bf22-fdec743e47ee","Type":"ContainerStarted","Data":"479b03b0d22f42a532c48eb369a41ad10eb068f11b5f4600bf6355106af1f04c"} Dec 02 20:31:27 crc kubenswrapper[4796]: I1202 20:31:27.870173 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-sync-gldrd" event={"ID":"3fca6606-3b10-4323-99f7-1baae97f5477","Type":"ContainerStarted","Data":"7d5e160776ab02ecd68df753deb5b629277c3a4965ad23fd4851b84b7255ea64"} Dec 02 20:31:27 crc kubenswrapper[4796]: I1202 20:31:27.871443 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"94835415-6d5d-492c-9a10-b023803a2978","Type":"ContainerStarted","Data":"c715e4e852b4b97d1e86cfa40cad14b5de8bca2552b2f26aecbd5312901983d5"} Dec 02 20:31:27 crc kubenswrapper[4796]: I1202 20:31:27.929995 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-db-sync-gldrd" podStartSLOduration=2.990707557 podStartE2EDuration="11.929966672s" podCreationTimestamp="2025-12-02 20:31:16 +0000 UTC" firstStartedPulling="2025-12-02 20:31:17.643510786 +0000 UTC m=+1160.646886320" lastFinishedPulling="2025-12-02 20:31:26.582769901 +0000 UTC m=+1169.586145435" observedRunningTime="2025-12-02 20:31:27.920766398 +0000 UTC m=+1170.924141942" watchObservedRunningTime="2025-12-02 20:31:27.929966672 +0000 UTC m=+1170.933342236" Dec 02 20:31:30 crc kubenswrapper[4796]: I1202 20:31:30.895360 4796 generic.go:334] "Generic (PLEG): container finished" podID="3fca6606-3b10-4323-99f7-1baae97f5477" containerID="7d5e160776ab02ecd68df753deb5b629277c3a4965ad23fd4851b84b7255ea64" exitCode=0 Dec 02 20:31:30 crc kubenswrapper[4796]: I1202 20:31:30.895435 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-sync-gldrd" event={"ID":"3fca6606-3b10-4323-99f7-1baae97f5477","Type":"ContainerDied","Data":"7d5e160776ab02ecd68df753deb5b629277c3a4965ad23fd4851b84b7255ea64"} Dec 02 20:31:30 crc kubenswrapper[4796]: I1202 20:31:30.897356 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"94835415-6d5d-492c-9a10-b023803a2978","Type":"ContainerStarted","Data":"7e4fc8c5dde2222b166ef0de2b73ed59b3d37e0a048a796c7b28b6905b241e24"} Dec 02 20:31:32 crc kubenswrapper[4796]: I1202 20:31:32.297984 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-db-sync-gldrd" Dec 02 20:31:32 crc kubenswrapper[4796]: I1202 20:31:32.487029 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fca6606-3b10-4323-99f7-1baae97f5477-combined-ca-bundle\") pod \"3fca6606-3b10-4323-99f7-1baae97f5477\" (UID: \"3fca6606-3b10-4323-99f7-1baae97f5477\") " Dec 02 20:31:32 crc kubenswrapper[4796]: I1202 20:31:32.487110 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fca6606-3b10-4323-99f7-1baae97f5477-config-data\") pod \"3fca6606-3b10-4323-99f7-1baae97f5477\" (UID: \"3fca6606-3b10-4323-99f7-1baae97f5477\") " Dec 02 20:31:32 crc kubenswrapper[4796]: I1202 20:31:32.487148 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9f2p\" (UniqueName: \"kubernetes.io/projected/3fca6606-3b10-4323-99f7-1baae97f5477-kube-api-access-q9f2p\") pod \"3fca6606-3b10-4323-99f7-1baae97f5477\" (UID: \"3fca6606-3b10-4323-99f7-1baae97f5477\") " Dec 02 20:31:32 crc kubenswrapper[4796]: I1202 20:31:32.496464 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fca6606-3b10-4323-99f7-1baae97f5477-kube-api-access-q9f2p" (OuterVolumeSpecName: "kube-api-access-q9f2p") pod "3fca6606-3b10-4323-99f7-1baae97f5477" (UID: "3fca6606-3b10-4323-99f7-1baae97f5477"). InnerVolumeSpecName "kube-api-access-q9f2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:31:32 crc kubenswrapper[4796]: I1202 20:31:32.523889 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fca6606-3b10-4323-99f7-1baae97f5477-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3fca6606-3b10-4323-99f7-1baae97f5477" (UID: "3fca6606-3b10-4323-99f7-1baae97f5477"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:31:32 crc kubenswrapper[4796]: I1202 20:31:32.560638 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fca6606-3b10-4323-99f7-1baae97f5477-config-data" (OuterVolumeSpecName: "config-data") pod "3fca6606-3b10-4323-99f7-1baae97f5477" (UID: "3fca6606-3b10-4323-99f7-1baae97f5477"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:31:32 crc kubenswrapper[4796]: I1202 20:31:32.589126 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fca6606-3b10-4323-99f7-1baae97f5477-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:31:32 crc kubenswrapper[4796]: I1202 20:31:32.589164 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fca6606-3b10-4323-99f7-1baae97f5477-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:31:32 crc kubenswrapper[4796]: I1202 20:31:32.589174 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9f2p\" (UniqueName: \"kubernetes.io/projected/3fca6606-3b10-4323-99f7-1baae97f5477-kube-api-access-q9f2p\") on node \"crc\" DevicePath \"\"" Dec 02 20:31:32 crc kubenswrapper[4796]: I1202 20:31:32.919775 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-sync-gldrd" event={"ID":"3fca6606-3b10-4323-99f7-1baae97f5477","Type":"ContainerDied","Data":"1183fb7e7743bc4399739304e8b6c352a87063d6c5214ae4c5374a2acfca8342"} Dec 02 20:31:32 crc kubenswrapper[4796]: I1202 20:31:32.919831 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1183fb7e7743bc4399739304e8b6c352a87063d6c5214ae4c5374a2acfca8342" Dec 02 20:31:32 crc kubenswrapper[4796]: I1202 20:31:32.919876 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-sync-gldrd" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.133982 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-hnc9z"] Dec 02 20:31:33 crc kubenswrapper[4796]: E1202 20:31:33.134440 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fca6606-3b10-4323-99f7-1baae97f5477" containerName="keystone-db-sync" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.134463 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fca6606-3b10-4323-99f7-1baae97f5477" containerName="keystone-db-sync" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.134659 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fca6606-3b10-4323-99f7-1baae97f5477" containerName="keystone-db-sync" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.135347 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-hnc9z" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.141554 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-scripts" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.141849 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"osp-secret" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.142144 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.142308 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-keystone-dockercfg-fj42t" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.142552 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-config-data" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.164652 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-hnc9z"] Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.300731 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/839710fa-92cd-45bd-9cac-30cd24d50772-config-data\") pod \"keystone-bootstrap-hnc9z\" (UID: \"839710fa-92cd-45bd-9cac-30cd24d50772\") " pod="watcher-kuttl-default/keystone-bootstrap-hnc9z" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.300783 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/839710fa-92cd-45bd-9cac-30cd24d50772-credential-keys\") pod \"keystone-bootstrap-hnc9z\" (UID: \"839710fa-92cd-45bd-9cac-30cd24d50772\") " pod="watcher-kuttl-default/keystone-bootstrap-hnc9z" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.300831 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/839710fa-92cd-45bd-9cac-30cd24d50772-combined-ca-bundle\") pod \"keystone-bootstrap-hnc9z\" (UID: \"839710fa-92cd-45bd-9cac-30cd24d50772\") " pod="watcher-kuttl-default/keystone-bootstrap-hnc9z" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.300863 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/839710fa-92cd-45bd-9cac-30cd24d50772-scripts\") pod \"keystone-bootstrap-hnc9z\" (UID: \"839710fa-92cd-45bd-9cac-30cd24d50772\") " pod="watcher-kuttl-default/keystone-bootstrap-hnc9z" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.301049 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/839710fa-92cd-45bd-9cac-30cd24d50772-fernet-keys\") pod \"keystone-bootstrap-hnc9z\" (UID: \"839710fa-92cd-45bd-9cac-30cd24d50772\") " pod="watcher-kuttl-default/keystone-bootstrap-hnc9z" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.301331 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s2nn\" (UniqueName: \"kubernetes.io/projected/839710fa-92cd-45bd-9cac-30cd24d50772-kube-api-access-8s2nn\") pod \"keystone-bootstrap-hnc9z\" (UID: \"839710fa-92cd-45bd-9cac-30cd24d50772\") " 
pod="watcher-kuttl-default/keystone-bootstrap-hnc9z" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.333512 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.335936 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.338755 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.341633 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.357469 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.409387 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/839710fa-92cd-45bd-9cac-30cd24d50772-config-data\") pod \"keystone-bootstrap-hnc9z\" (UID: \"839710fa-92cd-45bd-9cac-30cd24d50772\") " pod="watcher-kuttl-default/keystone-bootstrap-hnc9z" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.409453 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/839710fa-92cd-45bd-9cac-30cd24d50772-credential-keys\") pod \"keystone-bootstrap-hnc9z\" (UID: \"839710fa-92cd-45bd-9cac-30cd24d50772\") " pod="watcher-kuttl-default/keystone-bootstrap-hnc9z" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.409515 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/839710fa-92cd-45bd-9cac-30cd24d50772-combined-ca-bundle\") pod \"keystone-bootstrap-hnc9z\" (UID: \"839710fa-92cd-45bd-9cac-30cd24d50772\") " pod="watcher-kuttl-default/keystone-bootstrap-hnc9z" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.409550 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/839710fa-92cd-45bd-9cac-30cd24d50772-scripts\") pod \"keystone-bootstrap-hnc9z\" (UID: \"839710fa-92cd-45bd-9cac-30cd24d50772\") " pod="watcher-kuttl-default/keystone-bootstrap-hnc9z" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.409597 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/839710fa-92cd-45bd-9cac-30cd24d50772-fernet-keys\") pod \"keystone-bootstrap-hnc9z\" (UID: \"839710fa-92cd-45bd-9cac-30cd24d50772\") " pod="watcher-kuttl-default/keystone-bootstrap-hnc9z" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.409697 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s2nn\" (UniqueName: \"kubernetes.io/projected/839710fa-92cd-45bd-9cac-30cd24d50772-kube-api-access-8s2nn\") pod \"keystone-bootstrap-hnc9z\" (UID: \"839710fa-92cd-45bd-9cac-30cd24d50772\") " pod="watcher-kuttl-default/keystone-bootstrap-hnc9z" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.419747 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/839710fa-92cd-45bd-9cac-30cd24d50772-credential-keys\") pod \"keystone-bootstrap-hnc9z\" (UID: 
\"839710fa-92cd-45bd-9cac-30cd24d50772\") " pod="watcher-kuttl-default/keystone-bootstrap-hnc9z" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.420854 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/839710fa-92cd-45bd-9cac-30cd24d50772-fernet-keys\") pod \"keystone-bootstrap-hnc9z\" (UID: \"839710fa-92cd-45bd-9cac-30cd24d50772\") " pod="watcher-kuttl-default/keystone-bootstrap-hnc9z" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.421008 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/839710fa-92cd-45bd-9cac-30cd24d50772-scripts\") pod \"keystone-bootstrap-hnc9z\" (UID: \"839710fa-92cd-45bd-9cac-30cd24d50772\") " pod="watcher-kuttl-default/keystone-bootstrap-hnc9z" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.421225 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/839710fa-92cd-45bd-9cac-30cd24d50772-config-data\") pod \"keystone-bootstrap-hnc9z\" (UID: \"839710fa-92cd-45bd-9cac-30cd24d50772\") " pod="watcher-kuttl-default/keystone-bootstrap-hnc9z" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.421837 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/839710fa-92cd-45bd-9cac-30cd24d50772-combined-ca-bundle\") pod \"keystone-bootstrap-hnc9z\" (UID: \"839710fa-92cd-45bd-9cac-30cd24d50772\") " pod="watcher-kuttl-default/keystone-bootstrap-hnc9z" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.444864 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s2nn\" (UniqueName: \"kubernetes.io/projected/839710fa-92cd-45bd-9cac-30cd24d50772-kube-api-access-8s2nn\") pod \"keystone-bootstrap-hnc9z\" (UID: \"839710fa-92cd-45bd-9cac-30cd24d50772\") " pod="watcher-kuttl-default/keystone-bootstrap-hnc9z" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.460002 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-hnc9z" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.515425 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fec67e54-0540-45d4-b314-bd3ffc476b69-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fec67e54-0540-45d4-b314-bd3ffc476b69\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.515470 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fec67e54-0540-45d4-b314-bd3ffc476b69-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fec67e54-0540-45d4-b314-bd3ffc476b69\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.515498 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gbhm\" (UniqueName: \"kubernetes.io/projected/fec67e54-0540-45d4-b314-bd3ffc476b69-kube-api-access-4gbhm\") pod \"ceilometer-0\" (UID: \"fec67e54-0540-45d4-b314-bd3ffc476b69\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.515538 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fec67e54-0540-45d4-b314-bd3ffc476b69-scripts\") pod \"ceilometer-0\" (UID: \"fec67e54-0540-45d4-b314-bd3ffc476b69\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.515563 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fec67e54-0540-45d4-b314-bd3ffc476b69-config-data\") pod \"ceilometer-0\" (UID: \"fec67e54-0540-45d4-b314-bd3ffc476b69\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.515599 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fec67e54-0540-45d4-b314-bd3ffc476b69-run-httpd\") pod \"ceilometer-0\" (UID: \"fec67e54-0540-45d4-b314-bd3ffc476b69\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.515638 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fec67e54-0540-45d4-b314-bd3ffc476b69-log-httpd\") pod \"ceilometer-0\" (UID: \"fec67e54-0540-45d4-b314-bd3ffc476b69\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.642154 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fec67e54-0540-45d4-b314-bd3ffc476b69-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fec67e54-0540-45d4-b314-bd3ffc476b69\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.642220 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fec67e54-0540-45d4-b314-bd3ffc476b69-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fec67e54-0540-45d4-b314-bd3ffc476b69\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:31:33 crc 
kubenswrapper[4796]: I1202 20:31:33.642275 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gbhm\" (UniqueName: \"kubernetes.io/projected/fec67e54-0540-45d4-b314-bd3ffc476b69-kube-api-access-4gbhm\") pod \"ceilometer-0\" (UID: \"fec67e54-0540-45d4-b314-bd3ffc476b69\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.642317 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fec67e54-0540-45d4-b314-bd3ffc476b69-scripts\") pod \"ceilometer-0\" (UID: \"fec67e54-0540-45d4-b314-bd3ffc476b69\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.642375 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fec67e54-0540-45d4-b314-bd3ffc476b69-config-data\") pod \"ceilometer-0\" (UID: \"fec67e54-0540-45d4-b314-bd3ffc476b69\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.642457 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fec67e54-0540-45d4-b314-bd3ffc476b69-run-httpd\") pod \"ceilometer-0\" (UID: \"fec67e54-0540-45d4-b314-bd3ffc476b69\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.642493 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fec67e54-0540-45d4-b314-bd3ffc476b69-log-httpd\") pod \"ceilometer-0\" (UID: \"fec67e54-0540-45d4-b314-bd3ffc476b69\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.644694 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fec67e54-0540-45d4-b314-bd3ffc476b69-run-httpd\") pod \"ceilometer-0\" (UID: \"fec67e54-0540-45d4-b314-bd3ffc476b69\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.645803 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fec67e54-0540-45d4-b314-bd3ffc476b69-log-httpd\") pod \"ceilometer-0\" (UID: \"fec67e54-0540-45d4-b314-bd3ffc476b69\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.649871 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fec67e54-0540-45d4-b314-bd3ffc476b69-config-data\") pod \"ceilometer-0\" (UID: \"fec67e54-0540-45d4-b314-bd3ffc476b69\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.651946 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fec67e54-0540-45d4-b314-bd3ffc476b69-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fec67e54-0540-45d4-b314-bd3ffc476b69\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.659411 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fec67e54-0540-45d4-b314-bd3ffc476b69-scripts\") pod \"ceilometer-0\" (UID: \"fec67e54-0540-45d4-b314-bd3ffc476b69\") " pod="watcher-kuttl-default/ceilometer-0" 
Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.661943 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fec67e54-0540-45d4-b314-bd3ffc476b69-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fec67e54-0540-45d4-b314-bd3ffc476b69\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.664610 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gbhm\" (UniqueName: \"kubernetes.io/projected/fec67e54-0540-45d4-b314-bd3ffc476b69-kube-api-access-4gbhm\") pod \"ceilometer-0\" (UID: \"fec67e54-0540-45d4-b314-bd3ffc476b69\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.678874 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.812458 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-hnc9z"] Dec 02 20:31:33 crc kubenswrapper[4796]: I1202 20:31:33.938898 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-hnc9z" event={"ID":"839710fa-92cd-45bd-9cac-30cd24d50772","Type":"ContainerStarted","Data":"de8d73452536d448c971e5df73abf1cc548eca566c326873367b2a605f4162b9"} Dec 02 20:31:34 crc kubenswrapper[4796]: I1202 20:31:34.208337 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:31:34 crc kubenswrapper[4796]: W1202 20:31:34.211850 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfec67e54_0540_45d4_b314_bd3ffc476b69.slice/crio-1fc480199dd9151e88069951268fad6be1f97db3ca065b6180abce17ef67fb32 WatchSource:0}: Error finding container 1fc480199dd9151e88069951268fad6be1f97db3ca065b6180abce17ef67fb32: Status 404 returned error can't find the container with id 1fc480199dd9151e88069951268fad6be1f97db3ca065b6180abce17ef67fb32 Dec 02 20:31:34 crc kubenswrapper[4796]: I1202 20:31:34.959109 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"fec67e54-0540-45d4-b314-bd3ffc476b69","Type":"ContainerStarted","Data":"1fc480199dd9151e88069951268fad6be1f97db3ca065b6180abce17ef67fb32"} Dec 02 20:31:34 crc kubenswrapper[4796]: I1202 20:31:34.966034 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-hnc9z" event={"ID":"839710fa-92cd-45bd-9cac-30cd24d50772","Type":"ContainerStarted","Data":"b935eb123366328babe6d3481a7516c1e8f6803b67c07915c61ca84a3783e547"} Dec 02 20:31:35 crc kubenswrapper[4796]: I1202 20:31:35.332109 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-bootstrap-hnc9z" podStartSLOduration=2.33208577 podStartE2EDuration="2.33208577s" podCreationTimestamp="2025-12-02 20:31:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:31:34.989526001 +0000 UTC m=+1177.992901555" watchObservedRunningTime="2025-12-02 20:31:35.33208577 +0000 UTC m=+1178.335461304" Dec 02 20:31:35 crc kubenswrapper[4796]: I1202 20:31:35.341652 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:31:38 crc kubenswrapper[4796]: I1202 20:31:38.006801 4796 
generic.go:334] "Generic (PLEG): container finished" podID="94835415-6d5d-492c-9a10-b023803a2978" containerID="7e4fc8c5dde2222b166ef0de2b73ed59b3d37e0a048a796c7b28b6905b241e24" exitCode=0 Dec 02 20:31:38 crc kubenswrapper[4796]: I1202 20:31:38.006887 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"94835415-6d5d-492c-9a10-b023803a2978","Type":"ContainerDied","Data":"7e4fc8c5dde2222b166ef0de2b73ed59b3d37e0a048a796c7b28b6905b241e24"} Dec 02 20:31:38 crc kubenswrapper[4796]: I1202 20:31:38.008845 4796 generic.go:334] "Generic (PLEG): container finished" podID="839710fa-92cd-45bd-9cac-30cd24d50772" containerID="b935eb123366328babe6d3481a7516c1e8f6803b67c07915c61ca84a3783e547" exitCode=0 Dec 02 20:31:38 crc kubenswrapper[4796]: I1202 20:31:38.008883 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-hnc9z" event={"ID":"839710fa-92cd-45bd-9cac-30cd24d50772","Type":"ContainerDied","Data":"b935eb123366328babe6d3481a7516c1e8f6803b67c07915c61ca84a3783e547"} Dec 02 20:31:39 crc kubenswrapper[4796]: I1202 20:31:39.019541 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"94835415-6d5d-492c-9a10-b023803a2978","Type":"ContainerStarted","Data":"21e20b8c36ad4577cd095a5fe5b763753bd59afdc252b46ad24e64709eeaac38"} Dec 02 20:31:39 crc kubenswrapper[4796]: I1202 20:31:39.020755 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"fec67e54-0540-45d4-b314-bd3ffc476b69","Type":"ContainerStarted","Data":"78086db09ef45fc8c40a640dc406e17d21379bb93eade01ccd3a51840813d145"} Dec 02 20:31:39 crc kubenswrapper[4796]: I1202 20:31:39.345479 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-hnc9z" Dec 02 20:31:39 crc kubenswrapper[4796]: I1202 20:31:39.464020 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/839710fa-92cd-45bd-9cac-30cd24d50772-combined-ca-bundle\") pod \"839710fa-92cd-45bd-9cac-30cd24d50772\" (UID: \"839710fa-92cd-45bd-9cac-30cd24d50772\") " Dec 02 20:31:39 crc kubenswrapper[4796]: I1202 20:31:39.464100 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/839710fa-92cd-45bd-9cac-30cd24d50772-credential-keys\") pod \"839710fa-92cd-45bd-9cac-30cd24d50772\" (UID: \"839710fa-92cd-45bd-9cac-30cd24d50772\") " Dec 02 20:31:39 crc kubenswrapper[4796]: I1202 20:31:39.464152 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8s2nn\" (UniqueName: \"kubernetes.io/projected/839710fa-92cd-45bd-9cac-30cd24d50772-kube-api-access-8s2nn\") pod \"839710fa-92cd-45bd-9cac-30cd24d50772\" (UID: \"839710fa-92cd-45bd-9cac-30cd24d50772\") " Dec 02 20:31:39 crc kubenswrapper[4796]: I1202 20:31:39.464199 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/839710fa-92cd-45bd-9cac-30cd24d50772-fernet-keys\") pod \"839710fa-92cd-45bd-9cac-30cd24d50772\" (UID: \"839710fa-92cd-45bd-9cac-30cd24d50772\") " Dec 02 20:31:39 crc kubenswrapper[4796]: I1202 20:31:39.464240 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/839710fa-92cd-45bd-9cac-30cd24d50772-config-data\") pod \"839710fa-92cd-45bd-9cac-30cd24d50772\" (UID: \"839710fa-92cd-45bd-9cac-30cd24d50772\") " Dec 02 20:31:39 crc kubenswrapper[4796]: I1202 20:31:39.465336 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/839710fa-92cd-45bd-9cac-30cd24d50772-scripts\") pod \"839710fa-92cd-45bd-9cac-30cd24d50772\" (UID: \"839710fa-92cd-45bd-9cac-30cd24d50772\") " Dec 02 20:31:39 crc kubenswrapper[4796]: I1202 20:31:39.470622 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/839710fa-92cd-45bd-9cac-30cd24d50772-scripts" (OuterVolumeSpecName: "scripts") pod "839710fa-92cd-45bd-9cac-30cd24d50772" (UID: "839710fa-92cd-45bd-9cac-30cd24d50772"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:31:39 crc kubenswrapper[4796]: I1202 20:31:39.472307 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/839710fa-92cd-45bd-9cac-30cd24d50772-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "839710fa-92cd-45bd-9cac-30cd24d50772" (UID: "839710fa-92cd-45bd-9cac-30cd24d50772"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:31:39 crc kubenswrapper[4796]: I1202 20:31:39.472847 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/839710fa-92cd-45bd-9cac-30cd24d50772-kube-api-access-8s2nn" (OuterVolumeSpecName: "kube-api-access-8s2nn") pod "839710fa-92cd-45bd-9cac-30cd24d50772" (UID: "839710fa-92cd-45bd-9cac-30cd24d50772"). InnerVolumeSpecName "kube-api-access-8s2nn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:31:39 crc kubenswrapper[4796]: I1202 20:31:39.474890 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/839710fa-92cd-45bd-9cac-30cd24d50772-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "839710fa-92cd-45bd-9cac-30cd24d50772" (UID: "839710fa-92cd-45bd-9cac-30cd24d50772"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:31:39 crc kubenswrapper[4796]: I1202 20:31:39.489813 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/839710fa-92cd-45bd-9cac-30cd24d50772-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "839710fa-92cd-45bd-9cac-30cd24d50772" (UID: "839710fa-92cd-45bd-9cac-30cd24d50772"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:31:39 crc kubenswrapper[4796]: I1202 20:31:39.493174 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/839710fa-92cd-45bd-9cac-30cd24d50772-config-data" (OuterVolumeSpecName: "config-data") pod "839710fa-92cd-45bd-9cac-30cd24d50772" (UID: "839710fa-92cd-45bd-9cac-30cd24d50772"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:31:39 crc kubenswrapper[4796]: I1202 20:31:39.567401 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/839710fa-92cd-45bd-9cac-30cd24d50772-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:31:39 crc kubenswrapper[4796]: I1202 20:31:39.567448 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/839710fa-92cd-45bd-9cac-30cd24d50772-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:31:39 crc kubenswrapper[4796]: I1202 20:31:39.567461 4796 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/839710fa-92cd-45bd-9cac-30cd24d50772-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 02 20:31:39 crc kubenswrapper[4796]: I1202 20:31:39.567473 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8s2nn\" (UniqueName: \"kubernetes.io/projected/839710fa-92cd-45bd-9cac-30cd24d50772-kube-api-access-8s2nn\") on node \"crc\" DevicePath \"\"" Dec 02 20:31:39 crc kubenswrapper[4796]: I1202 20:31:39.567483 4796 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/839710fa-92cd-45bd-9cac-30cd24d50772-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 02 20:31:39 crc kubenswrapper[4796]: I1202 20:31:39.567492 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/839710fa-92cd-45bd-9cac-30cd24d50772-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:31:40 crc kubenswrapper[4796]: I1202 20:31:40.043227 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-hnc9z" event={"ID":"839710fa-92cd-45bd-9cac-30cd24d50772","Type":"ContainerDied","Data":"de8d73452536d448c971e5df73abf1cc548eca566c326873367b2a605f4162b9"} Dec 02 20:31:40 crc kubenswrapper[4796]: I1202 20:31:40.043302 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de8d73452536d448c971e5df73abf1cc548eca566c326873367b2a605f4162b9" Dec 02 20:31:40 crc kubenswrapper[4796]: I1202 20:31:40.043393 4796 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-hnc9z" Dec 02 20:31:40 crc kubenswrapper[4796]: I1202 20:31:40.140790 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-hnc9z"] Dec 02 20:31:40 crc kubenswrapper[4796]: I1202 20:31:40.151424 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-hnc9z"] Dec 02 20:31:40 crc kubenswrapper[4796]: I1202 20:31:40.247045 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-rxxpw"] Dec 02 20:31:40 crc kubenswrapper[4796]: E1202 20:31:40.247961 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="839710fa-92cd-45bd-9cac-30cd24d50772" containerName="keystone-bootstrap" Dec 02 20:31:40 crc kubenswrapper[4796]: I1202 20:31:40.247978 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="839710fa-92cd-45bd-9cac-30cd24d50772" containerName="keystone-bootstrap" Dec 02 20:31:40 crc kubenswrapper[4796]: I1202 20:31:40.248226 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="839710fa-92cd-45bd-9cac-30cd24d50772" containerName="keystone-bootstrap" Dec 02 20:31:40 crc kubenswrapper[4796]: I1202 20:31:40.250076 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-rxxpw" Dec 02 20:31:40 crc kubenswrapper[4796]: I1202 20:31:40.255703 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-keystone-dockercfg-fj42t" Dec 02 20:31:40 crc kubenswrapper[4796]: I1202 20:31:40.255703 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"osp-secret" Dec 02 20:31:40 crc kubenswrapper[4796]: I1202 20:31:40.255760 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone" Dec 02 20:31:40 crc kubenswrapper[4796]: I1202 20:31:40.255896 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-config-data" Dec 02 20:31:40 crc kubenswrapper[4796]: I1202 20:31:40.255921 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-scripts" Dec 02 20:31:40 crc kubenswrapper[4796]: I1202 20:31:40.282944 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-rxxpw"] Dec 02 20:31:40 crc kubenswrapper[4796]: I1202 20:31:40.380230 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bbd64eb-2206-45b9-b6c5-8be80b5ae862-config-data\") pod \"keystone-bootstrap-rxxpw\" (UID: \"7bbd64eb-2206-45b9-b6c5-8be80b5ae862\") " pod="watcher-kuttl-default/keystone-bootstrap-rxxpw" Dec 02 20:31:40 crc kubenswrapper[4796]: I1202 20:31:40.380314 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7bbd64eb-2206-45b9-b6c5-8be80b5ae862-fernet-keys\") pod \"keystone-bootstrap-rxxpw\" (UID: \"7bbd64eb-2206-45b9-b6c5-8be80b5ae862\") " pod="watcher-kuttl-default/keystone-bootstrap-rxxpw" Dec 02 20:31:40 crc kubenswrapper[4796]: I1202 20:31:40.380350 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bbd64eb-2206-45b9-b6c5-8be80b5ae862-scripts\") 
pod \"keystone-bootstrap-rxxpw\" (UID: \"7bbd64eb-2206-45b9-b6c5-8be80b5ae862\") " pod="watcher-kuttl-default/keystone-bootstrap-rxxpw" Dec 02 20:31:40 crc kubenswrapper[4796]: I1202 20:31:40.380383 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7bbd64eb-2206-45b9-b6c5-8be80b5ae862-credential-keys\") pod \"keystone-bootstrap-rxxpw\" (UID: \"7bbd64eb-2206-45b9-b6c5-8be80b5ae862\") " pod="watcher-kuttl-default/keystone-bootstrap-rxxpw" Dec 02 20:31:40 crc kubenswrapper[4796]: I1202 20:31:40.380660 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6k2b\" (UniqueName: \"kubernetes.io/projected/7bbd64eb-2206-45b9-b6c5-8be80b5ae862-kube-api-access-t6k2b\") pod \"keystone-bootstrap-rxxpw\" (UID: \"7bbd64eb-2206-45b9-b6c5-8be80b5ae862\") " pod="watcher-kuttl-default/keystone-bootstrap-rxxpw" Dec 02 20:31:40 crc kubenswrapper[4796]: I1202 20:31:40.381769 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bbd64eb-2206-45b9-b6c5-8be80b5ae862-combined-ca-bundle\") pod \"keystone-bootstrap-rxxpw\" (UID: \"7bbd64eb-2206-45b9-b6c5-8be80b5ae862\") " pod="watcher-kuttl-default/keystone-bootstrap-rxxpw" Dec 02 20:31:40 crc kubenswrapper[4796]: I1202 20:31:40.483904 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bbd64eb-2206-45b9-b6c5-8be80b5ae862-combined-ca-bundle\") pod \"keystone-bootstrap-rxxpw\" (UID: \"7bbd64eb-2206-45b9-b6c5-8be80b5ae862\") " pod="watcher-kuttl-default/keystone-bootstrap-rxxpw" Dec 02 20:31:40 crc kubenswrapper[4796]: I1202 20:31:40.483977 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bbd64eb-2206-45b9-b6c5-8be80b5ae862-config-data\") pod \"keystone-bootstrap-rxxpw\" (UID: \"7bbd64eb-2206-45b9-b6c5-8be80b5ae862\") " pod="watcher-kuttl-default/keystone-bootstrap-rxxpw" Dec 02 20:31:40 crc kubenswrapper[4796]: I1202 20:31:40.484018 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7bbd64eb-2206-45b9-b6c5-8be80b5ae862-fernet-keys\") pod \"keystone-bootstrap-rxxpw\" (UID: \"7bbd64eb-2206-45b9-b6c5-8be80b5ae862\") " pod="watcher-kuttl-default/keystone-bootstrap-rxxpw" Dec 02 20:31:40 crc kubenswrapper[4796]: I1202 20:31:40.484373 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bbd64eb-2206-45b9-b6c5-8be80b5ae862-scripts\") pod \"keystone-bootstrap-rxxpw\" (UID: \"7bbd64eb-2206-45b9-b6c5-8be80b5ae862\") " pod="watcher-kuttl-default/keystone-bootstrap-rxxpw" Dec 02 20:31:40 crc kubenswrapper[4796]: I1202 20:31:40.484898 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7bbd64eb-2206-45b9-b6c5-8be80b5ae862-credential-keys\") pod \"keystone-bootstrap-rxxpw\" (UID: \"7bbd64eb-2206-45b9-b6c5-8be80b5ae862\") " pod="watcher-kuttl-default/keystone-bootstrap-rxxpw" Dec 02 20:31:40 crc kubenswrapper[4796]: I1202 20:31:40.484924 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6k2b\" (UniqueName: 
\"kubernetes.io/projected/7bbd64eb-2206-45b9-b6c5-8be80b5ae862-kube-api-access-t6k2b\") pod \"keystone-bootstrap-rxxpw\" (UID: \"7bbd64eb-2206-45b9-b6c5-8be80b5ae862\") " pod="watcher-kuttl-default/keystone-bootstrap-rxxpw" Dec 02 20:31:40 crc kubenswrapper[4796]: I1202 20:31:40.490538 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bbd64eb-2206-45b9-b6c5-8be80b5ae862-combined-ca-bundle\") pod \"keystone-bootstrap-rxxpw\" (UID: \"7bbd64eb-2206-45b9-b6c5-8be80b5ae862\") " pod="watcher-kuttl-default/keystone-bootstrap-rxxpw" Dec 02 20:31:40 crc kubenswrapper[4796]: I1202 20:31:40.490584 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bbd64eb-2206-45b9-b6c5-8be80b5ae862-config-data\") pod \"keystone-bootstrap-rxxpw\" (UID: \"7bbd64eb-2206-45b9-b6c5-8be80b5ae862\") " pod="watcher-kuttl-default/keystone-bootstrap-rxxpw" Dec 02 20:31:40 crc kubenswrapper[4796]: I1202 20:31:40.490983 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7bbd64eb-2206-45b9-b6c5-8be80b5ae862-credential-keys\") pod \"keystone-bootstrap-rxxpw\" (UID: \"7bbd64eb-2206-45b9-b6c5-8be80b5ae862\") " pod="watcher-kuttl-default/keystone-bootstrap-rxxpw" Dec 02 20:31:40 crc kubenswrapper[4796]: I1202 20:31:40.491561 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7bbd64eb-2206-45b9-b6c5-8be80b5ae862-fernet-keys\") pod \"keystone-bootstrap-rxxpw\" (UID: \"7bbd64eb-2206-45b9-b6c5-8be80b5ae862\") " pod="watcher-kuttl-default/keystone-bootstrap-rxxpw" Dec 02 20:31:40 crc kubenswrapper[4796]: I1202 20:31:40.512548 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6k2b\" (UniqueName: \"kubernetes.io/projected/7bbd64eb-2206-45b9-b6c5-8be80b5ae862-kube-api-access-t6k2b\") pod \"keystone-bootstrap-rxxpw\" (UID: \"7bbd64eb-2206-45b9-b6c5-8be80b5ae862\") " pod="watcher-kuttl-default/keystone-bootstrap-rxxpw" Dec 02 20:31:40 crc kubenswrapper[4796]: I1202 20:31:40.538668 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bbd64eb-2206-45b9-b6c5-8be80b5ae862-scripts\") pod \"keystone-bootstrap-rxxpw\" (UID: \"7bbd64eb-2206-45b9-b6c5-8be80b5ae862\") " pod="watcher-kuttl-default/keystone-bootstrap-rxxpw" Dec 02 20:31:40 crc kubenswrapper[4796]: I1202 20:31:40.585688 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-rxxpw" Dec 02 20:31:41 crc kubenswrapper[4796]: I1202 20:31:41.053477 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"fec67e54-0540-45d4-b314-bd3ffc476b69","Type":"ContainerStarted","Data":"7f8fddc6f418307c1cc48657d984bf069bd5b3ed5e1cebae4d2781cf760ec424"} Dec 02 20:31:41 crc kubenswrapper[4796]: I1202 20:31:41.164758 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-rxxpw"] Dec 02 20:31:41 crc kubenswrapper[4796]: W1202 20:31:41.176702 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bbd64eb_2206_45b9_b6c5_8be80b5ae862.slice/crio-c2c22572479dd4dd81693b383cdfb2b6116c4ce723da6ba8a05a5aeb3913442b WatchSource:0}: Error finding container c2c22572479dd4dd81693b383cdfb2b6116c4ce723da6ba8a05a5aeb3913442b: Status 404 returned error can't find the container with id c2c22572479dd4dd81693b383cdfb2b6116c4ce723da6ba8a05a5aeb3913442b Dec 02 20:31:41 crc kubenswrapper[4796]: I1202 20:31:41.274505 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="839710fa-92cd-45bd-9cac-30cd24d50772" path="/var/lib/kubelet/pods/839710fa-92cd-45bd-9cac-30cd24d50772/volumes" Dec 02 20:31:42 crc kubenswrapper[4796]: I1202 20:31:42.067518 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-rxxpw" event={"ID":"7bbd64eb-2206-45b9-b6c5-8be80b5ae862","Type":"ContainerStarted","Data":"c2c22572479dd4dd81693b383cdfb2b6116c4ce723da6ba8a05a5aeb3913442b"} Dec 02 20:31:43 crc kubenswrapper[4796]: I1202 20:31:43.079205 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-rxxpw" event={"ID":"7bbd64eb-2206-45b9-b6c5-8be80b5ae862","Type":"ContainerStarted","Data":"8efb4c542d0b77bb2aadefbac6d81ecaca7108b0fe67dd73777a32d7ca9b6bde"} Dec 02 20:31:44 crc kubenswrapper[4796]: I1202 20:31:44.102336 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"94835415-6d5d-492c-9a10-b023803a2978","Type":"ContainerStarted","Data":"90fee20c6af4456785854f2ad7f73dca2733c0d100de8eff11c4bee1e43e3c9a"} Dec 02 20:31:44 crc kubenswrapper[4796]: I1202 20:31:44.131952 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-bootstrap-rxxpw" podStartSLOduration=4.131933373 podStartE2EDuration="4.131933373s" podCreationTimestamp="2025-12-02 20:31:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:31:44.129895743 +0000 UTC m=+1187.133271277" watchObservedRunningTime="2025-12-02 20:31:44.131933373 +0000 UTC m=+1187.135308907" Dec 02 20:31:45 crc kubenswrapper[4796]: I1202 20:31:45.131668 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"94835415-6d5d-492c-9a10-b023803a2978","Type":"ContainerStarted","Data":"4c003ed174182a020c80a67db304aeeb2061e18e4cfc5ae23faa2c19571e0098"} Dec 02 20:31:45 crc kubenswrapper[4796]: I1202 20:31:45.166194 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/prometheus-metric-storage-0" podStartSLOduration=25.166172519 podStartE2EDuration="25.166172519s" podCreationTimestamp="2025-12-02 20:31:20 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:31:45.157840105 +0000 UTC m=+1188.161215639" watchObservedRunningTime="2025-12-02 20:31:45.166172519 +0000 UTC m=+1188.169548043" Dec 02 20:31:46 crc kubenswrapper[4796]: I1202 20:31:46.458277 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:47 crc kubenswrapper[4796]: I1202 20:31:47.172450 4796 generic.go:334] "Generic (PLEG): container finished" podID="7bbd64eb-2206-45b9-b6c5-8be80b5ae862" containerID="8efb4c542d0b77bb2aadefbac6d81ecaca7108b0fe67dd73777a32d7ca9b6bde" exitCode=0 Dec 02 20:31:47 crc kubenswrapper[4796]: I1202 20:31:47.172533 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-rxxpw" event={"ID":"7bbd64eb-2206-45b9-b6c5-8be80b5ae862","Type":"ContainerDied","Data":"8efb4c542d0b77bb2aadefbac6d81ecaca7108b0fe67dd73777a32d7ca9b6bde"} Dec 02 20:31:48 crc kubenswrapper[4796]: I1202 20:31:48.184821 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"fec67e54-0540-45d4-b314-bd3ffc476b69","Type":"ContainerStarted","Data":"5c706a80e9b0ad765029569cc8958a3af9276e7ac77779de713328b86ca9c91b"} Dec 02 20:31:48 crc kubenswrapper[4796]: I1202 20:31:48.599664 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-rxxpw" Dec 02 20:31:48 crc kubenswrapper[4796]: I1202 20:31:48.762473 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bbd64eb-2206-45b9-b6c5-8be80b5ae862-config-data\") pod \"7bbd64eb-2206-45b9-b6c5-8be80b5ae862\" (UID: \"7bbd64eb-2206-45b9-b6c5-8be80b5ae862\") " Dec 02 20:31:48 crc kubenswrapper[4796]: I1202 20:31:48.762539 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6k2b\" (UniqueName: \"kubernetes.io/projected/7bbd64eb-2206-45b9-b6c5-8be80b5ae862-kube-api-access-t6k2b\") pod \"7bbd64eb-2206-45b9-b6c5-8be80b5ae862\" (UID: \"7bbd64eb-2206-45b9-b6c5-8be80b5ae862\") " Dec 02 20:31:48 crc kubenswrapper[4796]: I1202 20:31:48.762569 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bbd64eb-2206-45b9-b6c5-8be80b5ae862-combined-ca-bundle\") pod \"7bbd64eb-2206-45b9-b6c5-8be80b5ae862\" (UID: \"7bbd64eb-2206-45b9-b6c5-8be80b5ae862\") " Dec 02 20:31:48 crc kubenswrapper[4796]: I1202 20:31:48.762618 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7bbd64eb-2206-45b9-b6c5-8be80b5ae862-credential-keys\") pod \"7bbd64eb-2206-45b9-b6c5-8be80b5ae862\" (UID: \"7bbd64eb-2206-45b9-b6c5-8be80b5ae862\") " Dec 02 20:31:48 crc kubenswrapper[4796]: I1202 20:31:48.762723 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7bbd64eb-2206-45b9-b6c5-8be80b5ae862-fernet-keys\") pod \"7bbd64eb-2206-45b9-b6c5-8be80b5ae862\" (UID: \"7bbd64eb-2206-45b9-b6c5-8be80b5ae862\") " Dec 02 20:31:48 crc kubenswrapper[4796]: I1202 20:31:48.762897 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bbd64eb-2206-45b9-b6c5-8be80b5ae862-scripts\") 
pod \"7bbd64eb-2206-45b9-b6c5-8be80b5ae862\" (UID: \"7bbd64eb-2206-45b9-b6c5-8be80b5ae862\") " Dec 02 20:31:48 crc kubenswrapper[4796]: I1202 20:31:48.787486 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bbd64eb-2206-45b9-b6c5-8be80b5ae862-kube-api-access-t6k2b" (OuterVolumeSpecName: "kube-api-access-t6k2b") pod "7bbd64eb-2206-45b9-b6c5-8be80b5ae862" (UID: "7bbd64eb-2206-45b9-b6c5-8be80b5ae862"). InnerVolumeSpecName "kube-api-access-t6k2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:31:48 crc kubenswrapper[4796]: I1202 20:31:48.814158 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bbd64eb-2206-45b9-b6c5-8be80b5ae862-scripts" (OuterVolumeSpecName: "scripts") pod "7bbd64eb-2206-45b9-b6c5-8be80b5ae862" (UID: "7bbd64eb-2206-45b9-b6c5-8be80b5ae862"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:31:48 crc kubenswrapper[4796]: I1202 20:31:48.816382 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bbd64eb-2206-45b9-b6c5-8be80b5ae862-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7bbd64eb-2206-45b9-b6c5-8be80b5ae862" (UID: "7bbd64eb-2206-45b9-b6c5-8be80b5ae862"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:31:48 crc kubenswrapper[4796]: I1202 20:31:48.816445 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bbd64eb-2206-45b9-b6c5-8be80b5ae862-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "7bbd64eb-2206-45b9-b6c5-8be80b5ae862" (UID: "7bbd64eb-2206-45b9-b6c5-8be80b5ae862"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:31:48 crc kubenswrapper[4796]: I1202 20:31:48.827312 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bbd64eb-2206-45b9-b6c5-8be80b5ae862-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7bbd64eb-2206-45b9-b6c5-8be80b5ae862" (UID: "7bbd64eb-2206-45b9-b6c5-8be80b5ae862"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:31:48 crc kubenswrapper[4796]: I1202 20:31:48.828096 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bbd64eb-2206-45b9-b6c5-8be80b5ae862-config-data" (OuterVolumeSpecName: "config-data") pod "7bbd64eb-2206-45b9-b6c5-8be80b5ae862" (UID: "7bbd64eb-2206-45b9-b6c5-8be80b5ae862"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:31:48 crc kubenswrapper[4796]: I1202 20:31:48.865048 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bbd64eb-2206-45b9-b6c5-8be80b5ae862-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:31:48 crc kubenswrapper[4796]: I1202 20:31:48.865094 4796 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7bbd64eb-2206-45b9-b6c5-8be80b5ae862-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 02 20:31:48 crc kubenswrapper[4796]: I1202 20:31:48.865108 4796 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7bbd64eb-2206-45b9-b6c5-8be80b5ae862-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 02 20:31:48 crc kubenswrapper[4796]: I1202 20:31:48.865117 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bbd64eb-2206-45b9-b6c5-8be80b5ae862-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:31:48 crc kubenswrapper[4796]: I1202 20:31:48.865126 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bbd64eb-2206-45b9-b6c5-8be80b5ae862-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:31:48 crc kubenswrapper[4796]: I1202 20:31:48.865136 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6k2b\" (UniqueName: \"kubernetes.io/projected/7bbd64eb-2206-45b9-b6c5-8be80b5ae862-kube-api-access-t6k2b\") on node \"crc\" DevicePath \"\"" Dec 02 20:31:49 crc kubenswrapper[4796]: I1202 20:31:49.223739 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-rxxpw" event={"ID":"7bbd64eb-2206-45b9-b6c5-8be80b5ae862","Type":"ContainerDied","Data":"c2c22572479dd4dd81693b383cdfb2b6116c4ce723da6ba8a05a5aeb3913442b"} Dec 02 20:31:49 crc kubenswrapper[4796]: I1202 20:31:49.223814 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2c22572479dd4dd81693b383cdfb2b6116c4ce723da6ba8a05a5aeb3913442b" Dec 02 20:31:49 crc kubenswrapper[4796]: I1202 20:31:49.224236 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-rxxpw" Dec 02 20:31:49 crc kubenswrapper[4796]: I1202 20:31:49.297852 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-76b5bc4fb5-v2nj2"] Dec 02 20:31:49 crc kubenswrapper[4796]: E1202 20:31:49.298439 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bbd64eb-2206-45b9-b6c5-8be80b5ae862" containerName="keystone-bootstrap" Dec 02 20:31:49 crc kubenswrapper[4796]: I1202 20:31:49.298557 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bbd64eb-2206-45b9-b6c5-8be80b5ae862" containerName="keystone-bootstrap" Dec 02 20:31:49 crc kubenswrapper[4796]: I1202 20:31:49.298810 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bbd64eb-2206-45b9-b6c5-8be80b5ae862" containerName="keystone-bootstrap" Dec 02 20:31:49 crc kubenswrapper[4796]: I1202 20:31:49.299532 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-76b5bc4fb5-v2nj2" Dec 02 20:31:49 crc kubenswrapper[4796]: I1202 20:31:49.303039 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone" Dec 02 20:31:49 crc kubenswrapper[4796]: I1202 20:31:49.303120 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-keystone-public-svc" Dec 02 20:31:49 crc kubenswrapper[4796]: I1202 20:31:49.303053 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-keystone-internal-svc" Dec 02 20:31:49 crc kubenswrapper[4796]: I1202 20:31:49.303235 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-config-data" Dec 02 20:31:49 crc kubenswrapper[4796]: I1202 20:31:49.303314 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-keystone-dockercfg-fj42t" Dec 02 20:31:49 crc kubenswrapper[4796]: I1202 20:31:49.303447 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-scripts" Dec 02 20:31:49 crc kubenswrapper[4796]: I1202 20:31:49.320630 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-76b5bc4fb5-v2nj2"] Dec 02 20:31:49 crc kubenswrapper[4796]: I1202 20:31:49.379226 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-combined-ca-bundle\") pod \"keystone-76b5bc4fb5-v2nj2\" (UID: \"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac\") " pod="watcher-kuttl-default/keystone-76b5bc4fb5-v2nj2" Dec 02 20:31:49 crc kubenswrapper[4796]: I1202 20:31:49.379288 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-credential-keys\") pod \"keystone-76b5bc4fb5-v2nj2\" (UID: \"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac\") " pod="watcher-kuttl-default/keystone-76b5bc4fb5-v2nj2" Dec 02 20:31:49 crc kubenswrapper[4796]: I1202 20:31:49.379397 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-fernet-keys\") pod \"keystone-76b5bc4fb5-v2nj2\" (UID: \"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac\") " pod="watcher-kuttl-default/keystone-76b5bc4fb5-v2nj2" Dec 02 20:31:49 crc kubenswrapper[4796]: I1202 20:31:49.379435 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-scripts\") pod \"keystone-76b5bc4fb5-v2nj2\" (UID: \"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac\") " pod="watcher-kuttl-default/keystone-76b5bc4fb5-v2nj2" Dec 02 20:31:49 crc kubenswrapper[4796]: I1202 20:31:49.379464 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-public-tls-certs\") pod \"keystone-76b5bc4fb5-v2nj2\" (UID: \"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac\") " pod="watcher-kuttl-default/keystone-76b5bc4fb5-v2nj2" Dec 02 20:31:49 crc kubenswrapper[4796]: I1202 20:31:49.379489 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-config-data\") pod \"keystone-76b5bc4fb5-v2nj2\" (UID: \"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac\") " pod="watcher-kuttl-default/keystone-76b5bc4fb5-v2nj2" Dec 02 20:31:49 crc kubenswrapper[4796]: I1202 20:31:49.379596 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsflm\" (UniqueName: \"kubernetes.io/projected/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-kube-api-access-bsflm\") pod \"keystone-76b5bc4fb5-v2nj2\" (UID: \"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac\") " pod="watcher-kuttl-default/keystone-76b5bc4fb5-v2nj2" Dec 02 20:31:49 crc kubenswrapper[4796]: I1202 20:31:49.379672 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-internal-tls-certs\") pod \"keystone-76b5bc4fb5-v2nj2\" (UID: \"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac\") " pod="watcher-kuttl-default/keystone-76b5bc4fb5-v2nj2" Dec 02 20:31:49 crc kubenswrapper[4796]: I1202 20:31:49.482905 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-scripts\") pod \"keystone-76b5bc4fb5-v2nj2\" (UID: \"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac\") " pod="watcher-kuttl-default/keystone-76b5bc4fb5-v2nj2" Dec 02 20:31:49 crc kubenswrapper[4796]: I1202 20:31:49.483002 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-public-tls-certs\") pod \"keystone-76b5bc4fb5-v2nj2\" (UID: \"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac\") " pod="watcher-kuttl-default/keystone-76b5bc4fb5-v2nj2" Dec 02 20:31:49 crc kubenswrapper[4796]: I1202 20:31:49.483052 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-config-data\") pod \"keystone-76b5bc4fb5-v2nj2\" (UID: \"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac\") " pod="watcher-kuttl-default/keystone-76b5bc4fb5-v2nj2" Dec 02 20:31:49 crc kubenswrapper[4796]: I1202 20:31:49.483083 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsflm\" (UniqueName: \"kubernetes.io/projected/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-kube-api-access-bsflm\") pod \"keystone-76b5bc4fb5-v2nj2\" (UID: \"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac\") " pod="watcher-kuttl-default/keystone-76b5bc4fb5-v2nj2" Dec 02 20:31:49 crc kubenswrapper[4796]: I1202 20:31:49.483146 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-internal-tls-certs\") pod \"keystone-76b5bc4fb5-v2nj2\" (UID: \"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac\") " pod="watcher-kuttl-default/keystone-76b5bc4fb5-v2nj2" Dec 02 20:31:49 crc kubenswrapper[4796]: I1202 20:31:49.483223 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-combined-ca-bundle\") pod \"keystone-76b5bc4fb5-v2nj2\" (UID: \"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac\") " pod="watcher-kuttl-default/keystone-76b5bc4fb5-v2nj2" Dec 02 20:31:49 crc kubenswrapper[4796]: I1202 20:31:49.483279 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-credential-keys\") pod \"keystone-76b5bc4fb5-v2nj2\" (UID: \"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac\") " pod="watcher-kuttl-default/keystone-76b5bc4fb5-v2nj2" Dec 02 20:31:49 crc kubenswrapper[4796]: I1202 20:31:49.483381 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-fernet-keys\") pod \"keystone-76b5bc4fb5-v2nj2\" (UID: \"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac\") " pod="watcher-kuttl-default/keystone-76b5bc4fb5-v2nj2" Dec 02 20:31:49 crc kubenswrapper[4796]: I1202 20:31:49.487983 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-combined-ca-bundle\") pod \"keystone-76b5bc4fb5-v2nj2\" (UID: \"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac\") " pod="watcher-kuttl-default/keystone-76b5bc4fb5-v2nj2" Dec 02 20:31:49 crc kubenswrapper[4796]: I1202 20:31:49.492601 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-config-data\") pod \"keystone-76b5bc4fb5-v2nj2\" (UID: \"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac\") " pod="watcher-kuttl-default/keystone-76b5bc4fb5-v2nj2" Dec 02 20:31:49 crc kubenswrapper[4796]: I1202 20:31:49.492627 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-internal-tls-certs\") pod \"keystone-76b5bc4fb5-v2nj2\" (UID: \"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac\") " pod="watcher-kuttl-default/keystone-76b5bc4fb5-v2nj2" Dec 02 20:31:49 crc kubenswrapper[4796]: I1202 20:31:49.493975 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-fernet-keys\") pod \"keystone-76b5bc4fb5-v2nj2\" (UID: \"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac\") " pod="watcher-kuttl-default/keystone-76b5bc4fb5-v2nj2" Dec 02 20:31:49 crc kubenswrapper[4796]: I1202 20:31:49.497779 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-scripts\") pod \"keystone-76b5bc4fb5-v2nj2\" (UID: \"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac\") " pod="watcher-kuttl-default/keystone-76b5bc4fb5-v2nj2" Dec 02 20:31:49 crc kubenswrapper[4796]: I1202 20:31:49.500229 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsflm\" (UniqueName: \"kubernetes.io/projected/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-kube-api-access-bsflm\") pod \"keystone-76b5bc4fb5-v2nj2\" (UID: \"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac\") " pod="watcher-kuttl-default/keystone-76b5bc4fb5-v2nj2" Dec 02 20:31:49 crc kubenswrapper[4796]: I1202 20:31:49.501721 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-credential-keys\") pod \"keystone-76b5bc4fb5-v2nj2\" (UID: \"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac\") " pod="watcher-kuttl-default/keystone-76b5bc4fb5-v2nj2" Dec 02 20:31:49 crc kubenswrapper[4796]: I1202 20:31:49.502762 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-public-tls-certs\") pod \"keystone-76b5bc4fb5-v2nj2\" (UID: \"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac\") " pod="watcher-kuttl-default/keystone-76b5bc4fb5-v2nj2" Dec 02 20:31:49 crc kubenswrapper[4796]: I1202 20:31:49.621096 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-76b5bc4fb5-v2nj2" Dec 02 20:31:50 crc kubenswrapper[4796]: I1202 20:31:50.116071 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-76b5bc4fb5-v2nj2"] Dec 02 20:31:50 crc kubenswrapper[4796]: W1202 20:31:50.121959 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d7a0632_8bd8_46c2_9da9_f90d7e6bd8ac.slice/crio-ac472f08b79d22fd20ae45c7daae02f6516acb82aac3450a6040dcac10bb7efa WatchSource:0}: Error finding container ac472f08b79d22fd20ae45c7daae02f6516acb82aac3450a6040dcac10bb7efa: Status 404 returned error can't find the container with id ac472f08b79d22fd20ae45c7daae02f6516acb82aac3450a6040dcac10bb7efa Dec 02 20:31:50 crc kubenswrapper[4796]: I1202 20:31:50.255346 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-76b5bc4fb5-v2nj2" event={"ID":"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac","Type":"ContainerStarted","Data":"ac472f08b79d22fd20ae45c7daae02f6516acb82aac3450a6040dcac10bb7efa"} Dec 02 20:31:51 crc kubenswrapper[4796]: I1202 20:31:51.275426 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/keystone-76b5bc4fb5-v2nj2" Dec 02 20:31:51 crc kubenswrapper[4796]: I1202 20:31:51.276762 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-76b5bc4fb5-v2nj2" event={"ID":"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac","Type":"ContainerStarted","Data":"953318fa009c34af9afc15fdfb8ea644384e81dc112a7334158e887d2d33642f"} Dec 02 20:31:51 crc kubenswrapper[4796]: I1202 20:31:51.297801 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-76b5bc4fb5-v2nj2" podStartSLOduration=2.297761257 podStartE2EDuration="2.297761257s" podCreationTimestamp="2025-12-02 20:31:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:31:51.291827813 +0000 UTC m=+1194.295203387" watchObservedRunningTime="2025-12-02 20:31:51.297761257 +0000 UTC m=+1194.301136841" Dec 02 20:31:51 crc kubenswrapper[4796]: I1202 20:31:51.457463 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:51 crc kubenswrapper[4796]: I1202 20:31:51.465835 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:52 crc kubenswrapper[4796]: I1202 20:31:52.280775 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/prometheus-metric-storage-0" Dec 02 20:31:57 crc kubenswrapper[4796]: I1202 20:31:57.371005 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"fec67e54-0540-45d4-b314-bd3ffc476b69","Type":"ContainerStarted","Data":"fdde4ae9bc8377bb0a3947b0fe00d577b985aa8bd1fa89f6e8d1d49d95ba3248"} Dec 02 20:31:57 crc kubenswrapper[4796]: I1202 20:31:57.371397 4796 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="fec67e54-0540-45d4-b314-bd3ffc476b69" containerName="ceilometer-central-agent" containerID="cri-o://78086db09ef45fc8c40a640dc406e17d21379bb93eade01ccd3a51840813d145" gracePeriod=30 Dec 02 20:31:57 crc kubenswrapper[4796]: I1202 20:31:57.371986 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="fec67e54-0540-45d4-b314-bd3ffc476b69" containerName="proxy-httpd" containerID="cri-o://fdde4ae9bc8377bb0a3947b0fe00d577b985aa8bd1fa89f6e8d1d49d95ba3248" gracePeriod=30 Dec 02 20:31:57 crc kubenswrapper[4796]: I1202 20:31:57.371875 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:31:57 crc kubenswrapper[4796]: I1202 20:31:57.372093 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="fec67e54-0540-45d4-b314-bd3ffc476b69" containerName="sg-core" containerID="cri-o://5c706a80e9b0ad765029569cc8958a3af9276e7ac77779de713328b86ca9c91b" gracePeriod=30 Dec 02 20:31:57 crc kubenswrapper[4796]: I1202 20:31:57.372107 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="fec67e54-0540-45d4-b314-bd3ffc476b69" containerName="ceilometer-notification-agent" containerID="cri-o://7f8fddc6f418307c1cc48657d984bf069bd5b3ed5e1cebae4d2781cf760ec424" gracePeriod=30 Dec 02 20:31:57 crc kubenswrapper[4796]: I1202 20:31:57.393483 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.4984222090000001 podStartE2EDuration="24.393445901s" podCreationTimestamp="2025-12-02 20:31:33 +0000 UTC" firstStartedPulling="2025-12-02 20:31:34.214240455 +0000 UTC m=+1177.217615989" lastFinishedPulling="2025-12-02 20:31:57.109264147 +0000 UTC m=+1200.112639681" observedRunningTime="2025-12-02 20:31:57.39260068 +0000 UTC m=+1200.395976214" watchObservedRunningTime="2025-12-02 20:31:57.393445901 +0000 UTC m=+1200.396821435" Dec 02 20:31:58 crc kubenswrapper[4796]: I1202 20:31:58.385215 4796 generic.go:334] "Generic (PLEG): container finished" podID="fec67e54-0540-45d4-b314-bd3ffc476b69" containerID="5c706a80e9b0ad765029569cc8958a3af9276e7ac77779de713328b86ca9c91b" exitCode=2 Dec 02 20:31:58 crc kubenswrapper[4796]: I1202 20:31:58.385301 4796 generic.go:334] "Generic (PLEG): container finished" podID="fec67e54-0540-45d4-b314-bd3ffc476b69" containerID="78086db09ef45fc8c40a640dc406e17d21379bb93eade01ccd3a51840813d145" exitCode=0 Dec 02 20:31:58 crc kubenswrapper[4796]: I1202 20:31:58.385292 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"fec67e54-0540-45d4-b314-bd3ffc476b69","Type":"ContainerDied","Data":"5c706a80e9b0ad765029569cc8958a3af9276e7ac77779de713328b86ca9c91b"} Dec 02 20:31:58 crc kubenswrapper[4796]: I1202 20:31:58.385353 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"fec67e54-0540-45d4-b314-bd3ffc476b69","Type":"ContainerDied","Data":"78086db09ef45fc8c40a640dc406e17d21379bb93eade01ccd3a51840813d145"} Dec 02 20:32:03 crc kubenswrapper[4796]: I1202 20:32:03.448150 4796 generic.go:334] "Generic (PLEG): container finished" podID="fec67e54-0540-45d4-b314-bd3ffc476b69" containerID="7f8fddc6f418307c1cc48657d984bf069bd5b3ed5e1cebae4d2781cf760ec424" exitCode=0 Dec 02 20:32:03 crc kubenswrapper[4796]: 
I1202 20:32:03.448275 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"fec67e54-0540-45d4-b314-bd3ffc476b69","Type":"ContainerDied","Data":"7f8fddc6f418307c1cc48657d984bf069bd5b3ed5e1cebae4d2781cf760ec424"} Dec 02 20:32:21 crc kubenswrapper[4796]: I1202 20:32:21.217032 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/keystone-76b5bc4fb5-v2nj2" Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.301205 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/openstackclient"] Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.302569 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/openstackclient" Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.306953 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"openstackclient-openstackclient-dockercfg-p5xzj" Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.306972 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"openstack-config-secret" Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.306972 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"openstack-config" Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.322964 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b-openstack-config\") pod \"openstackclient\" (UID: \"9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b\") " pod="watcher-kuttl-default/openstackclient" Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.323074 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b\") " pod="watcher-kuttl-default/openstackclient" Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.323144 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b-openstack-config-secret\") pod \"openstackclient\" (UID: \"9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b\") " pod="watcher-kuttl-default/openstackclient" Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.323346 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st5m6\" (UniqueName: \"kubernetes.io/projected/9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b-kube-api-access-st5m6\") pod \"openstackclient\" (UID: \"9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b\") " pod="watcher-kuttl-default/openstackclient" Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.323668 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstackclient"] Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.425749 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b-openstack-config\") pod \"openstackclient\" (UID: \"9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b\") " pod="watcher-kuttl-default/openstackclient" Dec 02 20:32:22 crc 
kubenswrapper[4796]: I1202 20:32:22.425860 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b\") " pod="watcher-kuttl-default/openstackclient" Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.425912 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b-openstack-config-secret\") pod \"openstackclient\" (UID: \"9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b\") " pod="watcher-kuttl-default/openstackclient" Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.425950 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st5m6\" (UniqueName: \"kubernetes.io/projected/9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b-kube-api-access-st5m6\") pod \"openstackclient\" (UID: \"9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b\") " pod="watcher-kuttl-default/openstackclient" Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.427152 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b-openstack-config\") pod \"openstackclient\" (UID: \"9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b\") " pod="watcher-kuttl-default/openstackclient" Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.432861 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b-openstack-config-secret\") pod \"openstackclient\" (UID: \"9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b\") " pod="watcher-kuttl-default/openstackclient" Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.432875 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b\") " pod="watcher-kuttl-default/openstackclient" Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.443169 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st5m6\" (UniqueName: \"kubernetes.io/projected/9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b-kube-api-access-st5m6\") pod \"openstackclient\" (UID: \"9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b\") " pod="watcher-kuttl-default/openstackclient" Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.496058 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/openstackclient"] Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.497332 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/openstackclient" Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.505802 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/openstackclient"] Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.553415 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/openstackclient"] Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.554574 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/openstackclient" Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.560778 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstackclient"] Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.629212 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40949cd7-9274-47e5-bf48-306fed2c0360-combined-ca-bundle\") pod \"openstackclient\" (UID: \"40949cd7-9274-47e5-bf48-306fed2c0360\") " pod="watcher-kuttl-default/openstackclient" Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.629502 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/40949cd7-9274-47e5-bf48-306fed2c0360-openstack-config-secret\") pod \"openstackclient\" (UID: \"40949cd7-9274-47e5-bf48-306fed2c0360\") " pod="watcher-kuttl-default/openstackclient" Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.629679 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffl8p\" (UniqueName: \"kubernetes.io/projected/40949cd7-9274-47e5-bf48-306fed2c0360-kube-api-access-ffl8p\") pod \"openstackclient\" (UID: \"40949cd7-9274-47e5-bf48-306fed2c0360\") " pod="watcher-kuttl-default/openstackclient" Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.630033 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/40949cd7-9274-47e5-bf48-306fed2c0360-openstack-config\") pod \"openstackclient\" (UID: \"40949cd7-9274-47e5-bf48-306fed2c0360\") " pod="watcher-kuttl-default/openstackclient" Dec 02 20:32:22 crc kubenswrapper[4796]: E1202 20:32:22.650657 4796 log.go:32] "RunPodSandbox from runtime service failed" err=< Dec 02 20:32:22 crc kubenswrapper[4796]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_watcher-kuttl-default_9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b_0(ec643788929327fb8ff3989446a5b6f021d544029c33d0320a0a7cfc61d601ab): error adding pod watcher-kuttl-default_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ec643788929327fb8ff3989446a5b6f021d544029c33d0320a0a7cfc61d601ab" Netns:"/var/run/netns/d5c641d6-22e2-45fb-9675-45f304d2d54e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=watcher-kuttl-default;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=ec643788929327fb8ff3989446a5b6f021d544029c33d0320a0a7cfc61d601ab;K8S_POD_UID=9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b" Path:"" ERRORED: error configuring pod [watcher-kuttl-default/openstackclient] networking: Multus: [watcher-kuttl-default/openstackclient/9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b]: expected pod UID "9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b" but got "40949cd7-9274-47e5-bf48-306fed2c0360" from Kube API Dec 02 20:32:22 crc kubenswrapper[4796]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 02 
20:32:22 crc kubenswrapper[4796]: > Dec 02 20:32:22 crc kubenswrapper[4796]: E1202 20:32:22.650742 4796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Dec 02 20:32:22 crc kubenswrapper[4796]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_watcher-kuttl-default_9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b_0(ec643788929327fb8ff3989446a5b6f021d544029c33d0320a0a7cfc61d601ab): error adding pod watcher-kuttl-default_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ec643788929327fb8ff3989446a5b6f021d544029c33d0320a0a7cfc61d601ab" Netns:"/var/run/netns/d5c641d6-22e2-45fb-9675-45f304d2d54e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=watcher-kuttl-default;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=ec643788929327fb8ff3989446a5b6f021d544029c33d0320a0a7cfc61d601ab;K8S_POD_UID=9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b" Path:"" ERRORED: error configuring pod [watcher-kuttl-default/openstackclient] networking: Multus: [watcher-kuttl-default/openstackclient/9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b]: expected pod UID "9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b" but got "40949cd7-9274-47e5-bf48-306fed2c0360" from Kube API Dec 02 20:32:22 crc kubenswrapper[4796]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 02 20:32:22 crc kubenswrapper[4796]: > pod="watcher-kuttl-default/openstackclient" Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.675598 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/openstackclient" Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.683730 4796 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="watcher-kuttl-default/openstackclient" oldPodUID="9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b" podUID="40949cd7-9274-47e5-bf48-306fed2c0360" Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.687113 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/openstackclient" Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.731536 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b-openstack-config-secret\") pod \"9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b\" (UID: \"9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b\") " Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.731657 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st5m6\" (UniqueName: \"kubernetes.io/projected/9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b-kube-api-access-st5m6\") pod \"9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b\" (UID: \"9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b\") " Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.731743 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b-combined-ca-bundle\") pod \"9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b\" (UID: \"9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b\") " Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.731800 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b-openstack-config\") pod \"9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b\" (UID: \"9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b\") " Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.731965 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffl8p\" (UniqueName: \"kubernetes.io/projected/40949cd7-9274-47e5-bf48-306fed2c0360-kube-api-access-ffl8p\") pod \"openstackclient\" (UID: \"40949cd7-9274-47e5-bf48-306fed2c0360\") " pod="watcher-kuttl-default/openstackclient" Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.732048 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/40949cd7-9274-47e5-bf48-306fed2c0360-openstack-config\") pod \"openstackclient\" (UID: \"40949cd7-9274-47e5-bf48-306fed2c0360\") " pod="watcher-kuttl-default/openstackclient" Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.732076 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40949cd7-9274-47e5-bf48-306fed2c0360-combined-ca-bundle\") pod \"openstackclient\" (UID: \"40949cd7-9274-47e5-bf48-306fed2c0360\") " pod="watcher-kuttl-default/openstackclient" Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.732125 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/40949cd7-9274-47e5-bf48-306fed2c0360-openstack-config-secret\") pod \"openstackclient\" (UID: \"40949cd7-9274-47e5-bf48-306fed2c0360\") " pod="watcher-kuttl-default/openstackclient" Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.732671 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b" (UID: "9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.733428 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/40949cd7-9274-47e5-bf48-306fed2c0360-openstack-config\") pod \"openstackclient\" (UID: \"40949cd7-9274-47e5-bf48-306fed2c0360\") " pod="watcher-kuttl-default/openstackclient" Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.736020 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b" (UID: "9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.736282 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/40949cd7-9274-47e5-bf48-306fed2c0360-openstack-config-secret\") pod \"openstackclient\" (UID: \"40949cd7-9274-47e5-bf48-306fed2c0360\") " pod="watcher-kuttl-default/openstackclient" Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.739392 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b" (UID: "9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.739480 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b-kube-api-access-st5m6" (OuterVolumeSpecName: "kube-api-access-st5m6") pod "9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b" (UID: "9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b"). InnerVolumeSpecName "kube-api-access-st5m6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.739982 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40949cd7-9274-47e5-bf48-306fed2c0360-combined-ca-bundle\") pod \"openstackclient\" (UID: \"40949cd7-9274-47e5-bf48-306fed2c0360\") " pod="watcher-kuttl-default/openstackclient" Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.753143 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffl8p\" (UniqueName: \"kubernetes.io/projected/40949cd7-9274-47e5-bf48-306fed2c0360-kube-api-access-ffl8p\") pod \"openstackclient\" (UID: \"40949cd7-9274-47e5-bf48-306fed2c0360\") " pod="watcher-kuttl-default/openstackclient" Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.833830 4796 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.833870 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-st5m6\" (UniqueName: \"kubernetes.io/projected/9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b-kube-api-access-st5m6\") on node \"crc\" DevicePath \"\"" Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.833881 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.833890 4796 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 02 20:32:22 crc kubenswrapper[4796]: I1202 20:32:22.906690 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/openstackclient" Dec 02 20:32:23 crc kubenswrapper[4796]: I1202 20:32:23.285431 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b" path="/var/lib/kubelet/pods/9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b/volumes" Dec 02 20:32:23 crc kubenswrapper[4796]: I1202 20:32:23.406905 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstackclient"] Dec 02 20:32:23 crc kubenswrapper[4796]: I1202 20:32:23.691870 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstackclient" event={"ID":"40949cd7-9274-47e5-bf48-306fed2c0360","Type":"ContainerStarted","Data":"7c385412d703a7a6529b9c5a261a31b59f7cef8a06da25d6333d33fe8487f859"} Dec 02 20:32:23 crc kubenswrapper[4796]: I1202 20:32:23.691903 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/openstackclient" Dec 02 20:32:23 crc kubenswrapper[4796]: I1202 20:32:23.698941 4796 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="watcher-kuttl-default/openstackclient" oldPodUID="9b0c1e5d-3f68-4ccb-ad6b-1de651b1f54b" podUID="40949cd7-9274-47e5-bf48-306fed2c0360" Dec 02 20:32:27 crc kubenswrapper[4796]: I1202 20:32:27.730764 4796 generic.go:334] "Generic (PLEG): container finished" podID="fec67e54-0540-45d4-b314-bd3ffc476b69" containerID="fdde4ae9bc8377bb0a3947b0fe00d577b985aa8bd1fa89f6e8d1d49d95ba3248" exitCode=137 Dec 02 20:32:27 crc kubenswrapper[4796]: I1202 20:32:27.730850 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"fec67e54-0540-45d4-b314-bd3ffc476b69","Type":"ContainerDied","Data":"fdde4ae9bc8377bb0a3947b0fe00d577b985aa8bd1fa89f6e8d1d49d95ba3248"} Dec 02 20:32:33 crc kubenswrapper[4796]: I1202 20:32:33.681099 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="fec67e54-0540-45d4-b314-bd3ffc476b69" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.121:3000/\": dial tcp 10.217.0.121:3000: connect: connection refused" Dec 02 20:32:35 crc kubenswrapper[4796]: I1202 20:32:35.184294 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:32:35 crc kubenswrapper[4796]: I1202 20:32:35.346632 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fec67e54-0540-45d4-b314-bd3ffc476b69-config-data\") pod \"fec67e54-0540-45d4-b314-bd3ffc476b69\" (UID: \"fec67e54-0540-45d4-b314-bd3ffc476b69\") " Dec 02 20:32:35 crc kubenswrapper[4796]: I1202 20:32:35.346697 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fec67e54-0540-45d4-b314-bd3ffc476b69-sg-core-conf-yaml\") pod \"fec67e54-0540-45d4-b314-bd3ffc476b69\" (UID: \"fec67e54-0540-45d4-b314-bd3ffc476b69\") " Dec 02 20:32:35 crc kubenswrapper[4796]: I1202 20:32:35.347133 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gbhm\" (UniqueName: \"kubernetes.io/projected/fec67e54-0540-45d4-b314-bd3ffc476b69-kube-api-access-4gbhm\") pod \"fec67e54-0540-45d4-b314-bd3ffc476b69\" (UID: \"fec67e54-0540-45d4-b314-bd3ffc476b69\") " Dec 02 20:32:35 crc kubenswrapper[4796]: I1202 20:32:35.347228 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fec67e54-0540-45d4-b314-bd3ffc476b69-run-httpd\") pod \"fec67e54-0540-45d4-b314-bd3ffc476b69\" (UID: \"fec67e54-0540-45d4-b314-bd3ffc476b69\") " Dec 02 20:32:35 crc kubenswrapper[4796]: I1202 20:32:35.347680 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fec67e54-0540-45d4-b314-bd3ffc476b69-scripts\") pod \"fec67e54-0540-45d4-b314-bd3ffc476b69\" (UID: \"fec67e54-0540-45d4-b314-bd3ffc476b69\") " Dec 02 20:32:35 crc kubenswrapper[4796]: I1202 20:32:35.347702 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fec67e54-0540-45d4-b314-bd3ffc476b69-log-httpd\") pod \"fec67e54-0540-45d4-b314-bd3ffc476b69\" (UID: 
\"fec67e54-0540-45d4-b314-bd3ffc476b69\") " Dec 02 20:32:35 crc kubenswrapper[4796]: I1202 20:32:35.347728 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fec67e54-0540-45d4-b314-bd3ffc476b69-combined-ca-bundle\") pod \"fec67e54-0540-45d4-b314-bd3ffc476b69\" (UID: \"fec67e54-0540-45d4-b314-bd3ffc476b69\") " Dec 02 20:32:35 crc kubenswrapper[4796]: I1202 20:32:35.355098 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fec67e54-0540-45d4-b314-bd3ffc476b69-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fec67e54-0540-45d4-b314-bd3ffc476b69" (UID: "fec67e54-0540-45d4-b314-bd3ffc476b69"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:32:35 crc kubenswrapper[4796]: I1202 20:32:35.355582 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fec67e54-0540-45d4-b314-bd3ffc476b69-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fec67e54-0540-45d4-b314-bd3ffc476b69" (UID: "fec67e54-0540-45d4-b314-bd3ffc476b69"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:32:35 crc kubenswrapper[4796]: I1202 20:32:35.355818 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fec67e54-0540-45d4-b314-bd3ffc476b69-kube-api-access-4gbhm" (OuterVolumeSpecName: "kube-api-access-4gbhm") pod "fec67e54-0540-45d4-b314-bd3ffc476b69" (UID: "fec67e54-0540-45d4-b314-bd3ffc476b69"). InnerVolumeSpecName "kube-api-access-4gbhm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:32:35 crc kubenswrapper[4796]: I1202 20:32:35.360925 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gbhm\" (UniqueName: \"kubernetes.io/projected/fec67e54-0540-45d4-b314-bd3ffc476b69-kube-api-access-4gbhm\") on node \"crc\" DevicePath \"\"" Dec 02 20:32:35 crc kubenswrapper[4796]: I1202 20:32:35.360978 4796 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fec67e54-0540-45d4-b314-bd3ffc476b69-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 20:32:35 crc kubenswrapper[4796]: I1202 20:32:35.360993 4796 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fec67e54-0540-45d4-b314-bd3ffc476b69-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 20:32:35 crc kubenswrapper[4796]: I1202 20:32:35.364967 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fec67e54-0540-45d4-b314-bd3ffc476b69-scripts" (OuterVolumeSpecName: "scripts") pod "fec67e54-0540-45d4-b314-bd3ffc476b69" (UID: "fec67e54-0540-45d4-b314-bd3ffc476b69"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:32:35 crc kubenswrapper[4796]: I1202 20:32:35.393074 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fec67e54-0540-45d4-b314-bd3ffc476b69-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fec67e54-0540-45d4-b314-bd3ffc476b69" (UID: "fec67e54-0540-45d4-b314-bd3ffc476b69"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:32:35 crc kubenswrapper[4796]: I1202 20:32:35.437618 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fec67e54-0540-45d4-b314-bd3ffc476b69-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fec67e54-0540-45d4-b314-bd3ffc476b69" (UID: "fec67e54-0540-45d4-b314-bd3ffc476b69"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:32:35 crc kubenswrapper[4796]: I1202 20:32:35.460178 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fec67e54-0540-45d4-b314-bd3ffc476b69-config-data" (OuterVolumeSpecName: "config-data") pod "fec67e54-0540-45d4-b314-bd3ffc476b69" (UID: "fec67e54-0540-45d4-b314-bd3ffc476b69"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:32:35 crc kubenswrapper[4796]: I1202 20:32:35.462387 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fec67e54-0540-45d4-b314-bd3ffc476b69-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:32:35 crc kubenswrapper[4796]: I1202 20:32:35.462494 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fec67e54-0540-45d4-b314-bd3ffc476b69-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:32:35 crc kubenswrapper[4796]: I1202 20:32:35.462575 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fec67e54-0540-45d4-b314-bd3ffc476b69-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:32:35 crc kubenswrapper[4796]: I1202 20:32:35.462719 4796 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fec67e54-0540-45d4-b314-bd3ffc476b69-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 20:32:35 crc kubenswrapper[4796]: I1202 20:32:35.842597 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"fec67e54-0540-45d4-b314-bd3ffc476b69","Type":"ContainerDied","Data":"1fc480199dd9151e88069951268fad6be1f97db3ca065b6180abce17ef67fb32"} Dec 02 20:32:35 crc kubenswrapper[4796]: I1202 20:32:35.842756 4796 scope.go:117] "RemoveContainer" containerID="fdde4ae9bc8377bb0a3947b0fe00d577b985aa8bd1fa89f6e8d1d49d95ba3248" Dec 02 20:32:35 crc kubenswrapper[4796]: I1202 20:32:35.842658 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:32:35 crc kubenswrapper[4796]: I1202 20:32:35.844911 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstackclient" event={"ID":"40949cd7-9274-47e5-bf48-306fed2c0360","Type":"ContainerStarted","Data":"170deeacd99813106094acd0b3df61a8b48bea27a31056ca478b8e80a9ff8ffc"} Dec 02 20:32:35 crc kubenswrapper[4796]: I1202 20:32:35.870372 4796 scope.go:117] "RemoveContainer" containerID="5c706a80e9b0ad765029569cc8958a3af9276e7ac77779de713328b86ca9c91b" Dec 02 20:32:35 crc kubenswrapper[4796]: I1202 20:32:35.880964 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/openstackclient" podStartSLOduration=2.335957765 podStartE2EDuration="13.880931049s" podCreationTimestamp="2025-12-02 20:32:22 +0000 UTC" firstStartedPulling="2025-12-02 20:32:23.40811259 +0000 UTC m=+1226.411488124" lastFinishedPulling="2025-12-02 20:32:34.953085874 +0000 UTC m=+1237.956461408" observedRunningTime="2025-12-02 20:32:35.877215508 +0000 UTC m=+1238.880591052" watchObservedRunningTime="2025-12-02 20:32:35.880931049 +0000 UTC m=+1238.884306593" Dec 02 20:32:35 crc kubenswrapper[4796]: I1202 20:32:35.929374 4796 scope.go:117] "RemoveContainer" containerID="7f8fddc6f418307c1cc48657d984bf069bd5b3ed5e1cebae4d2781cf760ec424" Dec 02 20:32:35 crc kubenswrapper[4796]: I1202 20:32:35.959672 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:32:35 crc kubenswrapper[4796]: I1202 20:32:35.972047 4796 scope.go:117] "RemoveContainer" containerID="78086db09ef45fc8c40a640dc406e17d21379bb93eade01ccd3a51840813d145" Dec 02 20:32:35 crc kubenswrapper[4796]: I1202 20:32:35.983886 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:32:36 crc kubenswrapper[4796]: I1202 20:32:36.004449 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:32:36 crc kubenswrapper[4796]: E1202 20:32:36.004849 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fec67e54-0540-45d4-b314-bd3ffc476b69" containerName="sg-core" Dec 02 20:32:36 crc kubenswrapper[4796]: I1202 20:32:36.004868 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="fec67e54-0540-45d4-b314-bd3ffc476b69" containerName="sg-core" Dec 02 20:32:36 crc kubenswrapper[4796]: E1202 20:32:36.004884 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fec67e54-0540-45d4-b314-bd3ffc476b69" containerName="proxy-httpd" Dec 02 20:32:36 crc kubenswrapper[4796]: I1202 20:32:36.004890 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="fec67e54-0540-45d4-b314-bd3ffc476b69" containerName="proxy-httpd" Dec 02 20:32:36 crc kubenswrapper[4796]: E1202 20:32:36.004899 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fec67e54-0540-45d4-b314-bd3ffc476b69" containerName="ceilometer-notification-agent" Dec 02 20:32:36 crc kubenswrapper[4796]: I1202 20:32:36.004906 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="fec67e54-0540-45d4-b314-bd3ffc476b69" containerName="ceilometer-notification-agent" Dec 02 20:32:36 crc kubenswrapper[4796]: E1202 20:32:36.004913 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fec67e54-0540-45d4-b314-bd3ffc476b69" containerName="ceilometer-central-agent" Dec 02 20:32:36 crc kubenswrapper[4796]: I1202 20:32:36.004920 4796 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="fec67e54-0540-45d4-b314-bd3ffc476b69" containerName="ceilometer-central-agent" Dec 02 20:32:36 crc kubenswrapper[4796]: I1202 20:32:36.005086 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="fec67e54-0540-45d4-b314-bd3ffc476b69" containerName="ceilometer-central-agent" Dec 02 20:32:36 crc kubenswrapper[4796]: I1202 20:32:36.005099 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="fec67e54-0540-45d4-b314-bd3ffc476b69" containerName="ceilometer-notification-agent" Dec 02 20:32:36 crc kubenswrapper[4796]: I1202 20:32:36.005113 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="fec67e54-0540-45d4-b314-bd3ffc476b69" containerName="proxy-httpd" Dec 02 20:32:36 crc kubenswrapper[4796]: I1202 20:32:36.005124 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="fec67e54-0540-45d4-b314-bd3ffc476b69" containerName="sg-core" Dec 02 20:32:36 crc kubenswrapper[4796]: I1202 20:32:36.006613 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:32:36 crc kubenswrapper[4796]: I1202 20:32:36.014230 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:32:36 crc kubenswrapper[4796]: I1202 20:32:36.029074 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Dec 02 20:32:36 crc kubenswrapper[4796]: I1202 20:32:36.029343 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Dec 02 20:32:36 crc kubenswrapper[4796]: I1202 20:32:36.094501 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d7js\" (UniqueName: \"kubernetes.io/projected/27a71b6c-6980-4fb2-b7c3-021f85f3ec8b-kube-api-access-5d7js\") pod \"ceilometer-0\" (UID: \"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:32:36 crc kubenswrapper[4796]: I1202 20:32:36.094569 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a71b6c-6980-4fb2-b7c3-021f85f3ec8b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:32:36 crc kubenswrapper[4796]: I1202 20:32:36.094596 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27a71b6c-6980-4fb2-b7c3-021f85f3ec8b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:32:36 crc kubenswrapper[4796]: I1202 20:32:36.094635 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27a71b6c-6980-4fb2-b7c3-021f85f3ec8b-log-httpd\") pod \"ceilometer-0\" (UID: \"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:32:36 crc kubenswrapper[4796]: I1202 20:32:36.094705 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27a71b6c-6980-4fb2-b7c3-021f85f3ec8b-config-data\") pod \"ceilometer-0\" (UID: \"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 
20:32:36 crc kubenswrapper[4796]: I1202 20:32:36.094726 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27a71b6c-6980-4fb2-b7c3-021f85f3ec8b-run-httpd\") pod \"ceilometer-0\" (UID: \"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:32:36 crc kubenswrapper[4796]: I1202 20:32:36.094748 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27a71b6c-6980-4fb2-b7c3-021f85f3ec8b-scripts\") pod \"ceilometer-0\" (UID: \"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:32:36 crc kubenswrapper[4796]: I1202 20:32:36.196358 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d7js\" (UniqueName: \"kubernetes.io/projected/27a71b6c-6980-4fb2-b7c3-021f85f3ec8b-kube-api-access-5d7js\") pod \"ceilometer-0\" (UID: \"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:32:36 crc kubenswrapper[4796]: I1202 20:32:36.196439 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a71b6c-6980-4fb2-b7c3-021f85f3ec8b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:32:36 crc kubenswrapper[4796]: I1202 20:32:36.196460 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27a71b6c-6980-4fb2-b7c3-021f85f3ec8b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:32:36 crc kubenswrapper[4796]: I1202 20:32:36.196495 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27a71b6c-6980-4fb2-b7c3-021f85f3ec8b-log-httpd\") pod \"ceilometer-0\" (UID: \"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:32:36 crc kubenswrapper[4796]: I1202 20:32:36.196558 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27a71b6c-6980-4fb2-b7c3-021f85f3ec8b-config-data\") pod \"ceilometer-0\" (UID: \"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:32:36 crc kubenswrapper[4796]: I1202 20:32:36.196576 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27a71b6c-6980-4fb2-b7c3-021f85f3ec8b-run-httpd\") pod \"ceilometer-0\" (UID: \"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:32:36 crc kubenswrapper[4796]: I1202 20:32:36.196599 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27a71b6c-6980-4fb2-b7c3-021f85f3ec8b-scripts\") pod \"ceilometer-0\" (UID: \"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:32:36 crc kubenswrapper[4796]: I1202 20:32:36.202566 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27a71b6c-6980-4fb2-b7c3-021f85f3ec8b-log-httpd\") pod 
\"ceilometer-0\" (UID: \"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:32:36 crc kubenswrapper[4796]: I1202 20:32:36.202678 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27a71b6c-6980-4fb2-b7c3-021f85f3ec8b-run-httpd\") pod \"ceilometer-0\" (UID: \"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:32:36 crc kubenswrapper[4796]: I1202 20:32:36.207573 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27a71b6c-6980-4fb2-b7c3-021f85f3ec8b-config-data\") pod \"ceilometer-0\" (UID: \"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:32:36 crc kubenswrapper[4796]: I1202 20:32:36.208210 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27a71b6c-6980-4fb2-b7c3-021f85f3ec8b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:32:36 crc kubenswrapper[4796]: I1202 20:32:36.211126 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a71b6c-6980-4fb2-b7c3-021f85f3ec8b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:32:36 crc kubenswrapper[4796]: I1202 20:32:36.214240 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27a71b6c-6980-4fb2-b7c3-021f85f3ec8b-scripts\") pod \"ceilometer-0\" (UID: \"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:32:36 crc kubenswrapper[4796]: I1202 20:32:36.224851 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d7js\" (UniqueName: \"kubernetes.io/projected/27a71b6c-6980-4fb2-b7c3-021f85f3ec8b-kube-api-access-5d7js\") pod \"ceilometer-0\" (UID: \"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:32:36 crc kubenswrapper[4796]: I1202 20:32:36.354058 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:32:36 crc kubenswrapper[4796]: I1202 20:32:36.879243 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:32:36 crc kubenswrapper[4796]: W1202 20:32:36.885566 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27a71b6c_6980_4fb2_b7c3_021f85f3ec8b.slice/crio-a454d6d6d3336d6c279ce5f3848dd9cc9bf1b4f9cf66697a4c1a254a1a5ea31d WatchSource:0}: Error finding container a454d6d6d3336d6c279ce5f3848dd9cc9bf1b4f9cf66697a4c1a254a1a5ea31d: Status 404 returned error can't find the container with id a454d6d6d3336d6c279ce5f3848dd9cc9bf1b4f9cf66697a4c1a254a1a5ea31d Dec 02 20:32:37 crc kubenswrapper[4796]: I1202 20:32:37.277751 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fec67e54-0540-45d4-b314-bd3ffc476b69" path="/var/lib/kubelet/pods/fec67e54-0540-45d4-b314-bd3ffc476b69/volumes" Dec 02 20:32:37 crc kubenswrapper[4796]: I1202 20:32:37.872515 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b","Type":"ContainerStarted","Data":"c00fdd849166ced0bdb2f9410377333f969905fb79cc9b45cf07bc4d21970858"} Dec 02 20:32:37 crc kubenswrapper[4796]: I1202 20:32:37.873057 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b","Type":"ContainerStarted","Data":"a454d6d6d3336d6c279ce5f3848dd9cc9bf1b4f9cf66697a4c1a254a1a5ea31d"} Dec 02 20:32:38 crc kubenswrapper[4796]: I1202 20:32:38.883552 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b","Type":"ContainerStarted","Data":"8a7635569f5411e713e7070d61f18c87eb52eb4604d097532a11a25d808a5b32"} Dec 02 20:32:39 crc kubenswrapper[4796]: I1202 20:32:39.897370 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b","Type":"ContainerStarted","Data":"4f4d656d7487dfe493ec87a8400f050e6e64875c5089bc6c5ad5bacf9d171259"} Dec 02 20:32:40 crc kubenswrapper[4796]: I1202 20:32:40.909714 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b","Type":"ContainerStarted","Data":"6d367d389dcef9b6ac22acc7449fafe281efe3942ea73f7e9fbe95f811bcbad1"} Dec 02 20:32:40 crc kubenswrapper[4796]: I1202 20:32:40.911088 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:33:06 crc kubenswrapper[4796]: I1202 20:33:06.368926 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:33:06 crc kubenswrapper[4796]: I1202 20:33:06.428243 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=28.03863519 podStartE2EDuration="31.428217141s" podCreationTimestamp="2025-12-02 20:32:35 +0000 UTC" firstStartedPulling="2025-12-02 20:32:36.889701168 +0000 UTC m=+1239.893076702" lastFinishedPulling="2025-12-02 20:32:40.279283119 +0000 UTC m=+1243.282658653" observedRunningTime="2025-12-02 20:32:40.946683494 +0000 UTC m=+1243.950059028" watchObservedRunningTime="2025-12-02 20:33:06.428217141 +0000 UTC m=+1269.431592675" Dec 02 
20:33:09 crc kubenswrapper[4796]: I1202 20:33:09.139598 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Dec 02 20:33:09 crc kubenswrapper[4796]: I1202 20:33:09.140201 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/kube-state-metrics-0" podUID="52ef12c4-98d6-4208-b52c-b32a152b87bc" containerName="kube-state-metrics" containerID="cri-o://41c0216f62af48ce51ad081afc04dfde2ef3d51a52fc0ca713289c45dfa5be6c" gracePeriod=30 Dec 02 20:33:09 crc kubenswrapper[4796]: I1202 20:33:09.712657 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 20:33:09 crc kubenswrapper[4796]: I1202 20:33:09.763851 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5krrx\" (UniqueName: \"kubernetes.io/projected/52ef12c4-98d6-4208-b52c-b32a152b87bc-kube-api-access-5krrx\") pod \"52ef12c4-98d6-4208-b52c-b32a152b87bc\" (UID: \"52ef12c4-98d6-4208-b52c-b32a152b87bc\") " Dec 02 20:33:09 crc kubenswrapper[4796]: I1202 20:33:09.772581 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52ef12c4-98d6-4208-b52c-b32a152b87bc-kube-api-access-5krrx" (OuterVolumeSpecName: "kube-api-access-5krrx") pod "52ef12c4-98d6-4208-b52c-b32a152b87bc" (UID: "52ef12c4-98d6-4208-b52c-b32a152b87bc"). InnerVolumeSpecName "kube-api-access-5krrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:33:09 crc kubenswrapper[4796]: I1202 20:33:09.865851 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5krrx\" (UniqueName: \"kubernetes.io/projected/52ef12c4-98d6-4208-b52c-b32a152b87bc-kube-api-access-5krrx\") on node \"crc\" DevicePath \"\"" Dec 02 20:33:10 crc kubenswrapper[4796]: I1202 20:33:10.187120 4796 generic.go:334] "Generic (PLEG): container finished" podID="52ef12c4-98d6-4208-b52c-b32a152b87bc" containerID="41c0216f62af48ce51ad081afc04dfde2ef3d51a52fc0ca713289c45dfa5be6c" exitCode=2 Dec 02 20:33:10 crc kubenswrapper[4796]: I1202 20:33:10.187174 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"52ef12c4-98d6-4208-b52c-b32a152b87bc","Type":"ContainerDied","Data":"41c0216f62af48ce51ad081afc04dfde2ef3d51a52fc0ca713289c45dfa5be6c"} Dec 02 20:33:10 crc kubenswrapper[4796]: I1202 20:33:10.187205 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"52ef12c4-98d6-4208-b52c-b32a152b87bc","Type":"ContainerDied","Data":"6b43e155c753412fc53b8d234d1e68baddf05c659d04dbd1cd01218837607422"} Dec 02 20:33:10 crc kubenswrapper[4796]: I1202 20:33:10.187225 4796 scope.go:117] "RemoveContainer" containerID="41c0216f62af48ce51ad081afc04dfde2ef3d51a52fc0ca713289c45dfa5be6c" Dec 02 20:33:10 crc kubenswrapper[4796]: I1202 20:33:10.187380 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 20:33:10 crc kubenswrapper[4796]: I1202 20:33:10.234658 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Dec 02 20:33:10 crc kubenswrapper[4796]: I1202 20:33:10.258358 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Dec 02 20:33:10 crc kubenswrapper[4796]: I1202 20:33:10.259463 4796 scope.go:117] "RemoveContainer" containerID="41c0216f62af48ce51ad081afc04dfde2ef3d51a52fc0ca713289c45dfa5be6c" Dec 02 20:33:10 crc kubenswrapper[4796]: E1202 20:33:10.260333 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41c0216f62af48ce51ad081afc04dfde2ef3d51a52fc0ca713289c45dfa5be6c\": container with ID starting with 41c0216f62af48ce51ad081afc04dfde2ef3d51a52fc0ca713289c45dfa5be6c not found: ID does not exist" containerID="41c0216f62af48ce51ad081afc04dfde2ef3d51a52fc0ca713289c45dfa5be6c" Dec 02 20:33:10 crc kubenswrapper[4796]: I1202 20:33:10.260372 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41c0216f62af48ce51ad081afc04dfde2ef3d51a52fc0ca713289c45dfa5be6c"} err="failed to get container status \"41c0216f62af48ce51ad081afc04dfde2ef3d51a52fc0ca713289c45dfa5be6c\": rpc error: code = NotFound desc = could not find container \"41c0216f62af48ce51ad081afc04dfde2ef3d51a52fc0ca713289c45dfa5be6c\": container with ID starting with 41c0216f62af48ce51ad081afc04dfde2ef3d51a52fc0ca713289c45dfa5be6c not found: ID does not exist" Dec 02 20:33:10 crc kubenswrapper[4796]: I1202 20:33:10.271238 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Dec 02 20:33:10 crc kubenswrapper[4796]: E1202 20:33:10.271724 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52ef12c4-98d6-4208-b52c-b32a152b87bc" containerName="kube-state-metrics" Dec 02 20:33:10 crc kubenswrapper[4796]: I1202 20:33:10.271740 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="52ef12c4-98d6-4208-b52c-b32a152b87bc" containerName="kube-state-metrics" Dec 02 20:33:10 crc kubenswrapper[4796]: I1202 20:33:10.271951 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="52ef12c4-98d6-4208-b52c-b32a152b87bc" containerName="kube-state-metrics" Dec 02 20:33:10 crc kubenswrapper[4796]: I1202 20:33:10.272762 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 20:33:10 crc kubenswrapper[4796]: I1202 20:33:10.275432 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-kube-state-metrics-svc" Dec 02 20:33:10 crc kubenswrapper[4796]: I1202 20:33:10.275677 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"kube-state-metrics-tls-config" Dec 02 20:33:10 crc kubenswrapper[4796]: I1202 20:33:10.285558 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Dec 02 20:33:10 crc kubenswrapper[4796]: I1202 20:33:10.375371 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs7c7\" (UniqueName: \"kubernetes.io/projected/353c1dae-27b5-40ce-b56c-f521add86d37-kube-api-access-gs7c7\") pod \"kube-state-metrics-0\" (UID: \"353c1dae-27b5-40ce-b56c-f521add86d37\") " pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 20:33:10 crc kubenswrapper[4796]: I1202 20:33:10.375680 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/353c1dae-27b5-40ce-b56c-f521add86d37-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"353c1dae-27b5-40ce-b56c-f521add86d37\") " pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 20:33:10 crc kubenswrapper[4796]: I1202 20:33:10.375742 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/353c1dae-27b5-40ce-b56c-f521add86d37-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"353c1dae-27b5-40ce-b56c-f521add86d37\") " pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 20:33:10 crc kubenswrapper[4796]: I1202 20:33:10.375776 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/353c1dae-27b5-40ce-b56c-f521add86d37-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"353c1dae-27b5-40ce-b56c-f521add86d37\") " pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 20:33:10 crc kubenswrapper[4796]: I1202 20:33:10.470806 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:33:10 crc kubenswrapper[4796]: I1202 20:33:10.471307 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="27a71b6c-6980-4fb2-b7c3-021f85f3ec8b" containerName="ceilometer-central-agent" containerID="cri-o://c00fdd849166ced0bdb2f9410377333f969905fb79cc9b45cf07bc4d21970858" gracePeriod=30 Dec 02 20:33:10 crc kubenswrapper[4796]: I1202 20:33:10.471396 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="27a71b6c-6980-4fb2-b7c3-021f85f3ec8b" containerName="proxy-httpd" containerID="cri-o://6d367d389dcef9b6ac22acc7449fafe281efe3942ea73f7e9fbe95f811bcbad1" gracePeriod=30 Dec 02 20:33:10 crc kubenswrapper[4796]: I1202 20:33:10.471512 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="27a71b6c-6980-4fb2-b7c3-021f85f3ec8b" containerName="ceilometer-notification-agent" containerID="cri-o://8a7635569f5411e713e7070d61f18c87eb52eb4604d097532a11a25d808a5b32" gracePeriod=30 
Dec 02 20:33:10 crc kubenswrapper[4796]: I1202 20:33:10.471421 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="27a71b6c-6980-4fb2-b7c3-021f85f3ec8b" containerName="sg-core" containerID="cri-o://4f4d656d7487dfe493ec87a8400f050e6e64875c5089bc6c5ad5bacf9d171259" gracePeriod=30 Dec 02 20:33:10 crc kubenswrapper[4796]: I1202 20:33:10.479320 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/353c1dae-27b5-40ce-b56c-f521add86d37-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"353c1dae-27b5-40ce-b56c-f521add86d37\") " pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 20:33:10 crc kubenswrapper[4796]: I1202 20:33:10.479397 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/353c1dae-27b5-40ce-b56c-f521add86d37-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"353c1dae-27b5-40ce-b56c-f521add86d37\") " pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 20:33:10 crc kubenswrapper[4796]: I1202 20:33:10.479506 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs7c7\" (UniqueName: \"kubernetes.io/projected/353c1dae-27b5-40ce-b56c-f521add86d37-kube-api-access-gs7c7\") pod \"kube-state-metrics-0\" (UID: \"353c1dae-27b5-40ce-b56c-f521add86d37\") " pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 20:33:10 crc kubenswrapper[4796]: I1202 20:33:10.479557 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/353c1dae-27b5-40ce-b56c-f521add86d37-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"353c1dae-27b5-40ce-b56c-f521add86d37\") " pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 20:33:10 crc kubenswrapper[4796]: I1202 20:33:10.487757 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/353c1dae-27b5-40ce-b56c-f521add86d37-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"353c1dae-27b5-40ce-b56c-f521add86d37\") " pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 20:33:10 crc kubenswrapper[4796]: I1202 20:33:10.487779 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/353c1dae-27b5-40ce-b56c-f521add86d37-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"353c1dae-27b5-40ce-b56c-f521add86d37\") " pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 20:33:10 crc kubenswrapper[4796]: I1202 20:33:10.493163 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/353c1dae-27b5-40ce-b56c-f521add86d37-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"353c1dae-27b5-40ce-b56c-f521add86d37\") " pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 20:33:10 crc kubenswrapper[4796]: I1202 20:33:10.503588 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs7c7\" (UniqueName: \"kubernetes.io/projected/353c1dae-27b5-40ce-b56c-f521add86d37-kube-api-access-gs7c7\") pod \"kube-state-metrics-0\" (UID: \"353c1dae-27b5-40ce-b56c-f521add86d37\") " pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 
20:33:10 crc kubenswrapper[4796]: I1202 20:33:10.599430 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 20:33:11 crc kubenswrapper[4796]: I1202 20:33:11.090665 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Dec 02 20:33:11 crc kubenswrapper[4796]: I1202 20:33:11.096048 4796 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 20:33:11 crc kubenswrapper[4796]: I1202 20:33:11.197751 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"353c1dae-27b5-40ce-b56c-f521add86d37","Type":"ContainerStarted","Data":"1f025bf766b0cfcb5519eb078708d809dd74e9338448e56c2fb356e15a47f0c3"} Dec 02 20:33:11 crc kubenswrapper[4796]: I1202 20:33:11.200848 4796 generic.go:334] "Generic (PLEG): container finished" podID="27a71b6c-6980-4fb2-b7c3-021f85f3ec8b" containerID="6d367d389dcef9b6ac22acc7449fafe281efe3942ea73f7e9fbe95f811bcbad1" exitCode=0 Dec 02 20:33:11 crc kubenswrapper[4796]: I1202 20:33:11.200883 4796 generic.go:334] "Generic (PLEG): container finished" podID="27a71b6c-6980-4fb2-b7c3-021f85f3ec8b" containerID="4f4d656d7487dfe493ec87a8400f050e6e64875c5089bc6c5ad5bacf9d171259" exitCode=2 Dec 02 20:33:11 crc kubenswrapper[4796]: I1202 20:33:11.200891 4796 generic.go:334] "Generic (PLEG): container finished" podID="27a71b6c-6980-4fb2-b7c3-021f85f3ec8b" containerID="c00fdd849166ced0bdb2f9410377333f969905fb79cc9b45cf07bc4d21970858" exitCode=0 Dec 02 20:33:11 crc kubenswrapper[4796]: I1202 20:33:11.200917 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b","Type":"ContainerDied","Data":"6d367d389dcef9b6ac22acc7449fafe281efe3942ea73f7e9fbe95f811bcbad1"} Dec 02 20:33:11 crc kubenswrapper[4796]: I1202 20:33:11.200948 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b","Type":"ContainerDied","Data":"4f4d656d7487dfe493ec87a8400f050e6e64875c5089bc6c5ad5bacf9d171259"} Dec 02 20:33:11 crc kubenswrapper[4796]: I1202 20:33:11.200958 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b","Type":"ContainerDied","Data":"c00fdd849166ced0bdb2f9410377333f969905fb79cc9b45cf07bc4d21970858"} Dec 02 20:33:11 crc kubenswrapper[4796]: I1202 20:33:11.273988 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52ef12c4-98d6-4208-b52c-b32a152b87bc" path="/var/lib/kubelet/pods/52ef12c4-98d6-4208-b52c-b32a152b87bc/volumes" Dec 02 20:33:12 crc kubenswrapper[4796]: I1202 20:33:12.210582 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"353c1dae-27b5-40ce-b56c-f521add86d37","Type":"ContainerStarted","Data":"d85cec03b024a81db5c267607f5a4ebdf2b9f529a3e33ef9b76d97c099de9297"} Dec 02 20:33:12 crc kubenswrapper[4796]: I1202 20:33:12.211090 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 20:33:12 crc kubenswrapper[4796]: I1202 20:33:12.234593 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/kube-state-metrics-0" podStartSLOduration=1.83769495 podStartE2EDuration="2.234565949s" 
podCreationTimestamp="2025-12-02 20:33:10 +0000 UTC" firstStartedPulling="2025-12-02 20:33:11.095760774 +0000 UTC m=+1274.099136308" lastFinishedPulling="2025-12-02 20:33:11.492631773 +0000 UTC m=+1274.496007307" observedRunningTime="2025-12-02 20:33:12.227566248 +0000 UTC m=+1275.230941782" watchObservedRunningTime="2025-12-02 20:33:12.234565949 +0000 UTC m=+1275.237941483" Dec 02 20:33:13 crc kubenswrapper[4796]: I1202 20:33:13.237063 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-fztln"] Dec 02 20:33:13 crc kubenswrapper[4796]: I1202 20:33:13.238425 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-fztln" Dec 02 20:33:13 crc kubenswrapper[4796]: I1202 20:33:13.247191 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-09d1-account-create-update-6d25l"] Dec 02 20:33:13 crc kubenswrapper[4796]: I1202 20:33:13.248621 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-09d1-account-create-update-6d25l" Dec 02 20:33:13 crc kubenswrapper[4796]: I1202 20:33:13.263626 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Dec 02 20:33:13 crc kubenswrapper[4796]: I1202 20:33:13.288572 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-09d1-account-create-update-6d25l"] Dec 02 20:33:13 crc kubenswrapper[4796]: I1202 20:33:13.288621 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-fztln"] Dec 02 20:33:13 crc kubenswrapper[4796]: I1202 20:33:13.338315 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e39c55c-f8eb-4fbf-8c26-52189974b68c-operator-scripts\") pod \"watcher-db-create-fztln\" (UID: \"4e39c55c-f8eb-4fbf-8c26-52189974b68c\") " pod="watcher-kuttl-default/watcher-db-create-fztln" Dec 02 20:33:13 crc kubenswrapper[4796]: I1202 20:33:13.338375 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzvk2\" (UniqueName: \"kubernetes.io/projected/4e39c55c-f8eb-4fbf-8c26-52189974b68c-kube-api-access-tzvk2\") pod \"watcher-db-create-fztln\" (UID: \"4e39c55c-f8eb-4fbf-8c26-52189974b68c\") " pod="watcher-kuttl-default/watcher-db-create-fztln" Dec 02 20:33:13 crc kubenswrapper[4796]: I1202 20:33:13.338411 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/246b4725-b3de-4c2d-850f-8c02c1066622-operator-scripts\") pod \"watcher-09d1-account-create-update-6d25l\" (UID: \"246b4725-b3de-4c2d-850f-8c02c1066622\") " pod="watcher-kuttl-default/watcher-09d1-account-create-update-6d25l" Dec 02 20:33:13 crc kubenswrapper[4796]: I1202 20:33:13.338527 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrbbf\" (UniqueName: \"kubernetes.io/projected/246b4725-b3de-4c2d-850f-8c02c1066622-kube-api-access-jrbbf\") pod \"watcher-09d1-account-create-update-6d25l\" (UID: \"246b4725-b3de-4c2d-850f-8c02c1066622\") " pod="watcher-kuttl-default/watcher-09d1-account-create-update-6d25l" Dec 02 20:33:13 crc kubenswrapper[4796]: I1202 20:33:13.440025 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e39c55c-f8eb-4fbf-8c26-52189974b68c-operator-scripts\") pod \"watcher-db-create-fztln\" (UID: \"4e39c55c-f8eb-4fbf-8c26-52189974b68c\") " pod="watcher-kuttl-default/watcher-db-create-fztln" Dec 02 20:33:13 crc kubenswrapper[4796]: I1202 20:33:13.440107 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzvk2\" (UniqueName: \"kubernetes.io/projected/4e39c55c-f8eb-4fbf-8c26-52189974b68c-kube-api-access-tzvk2\") pod \"watcher-db-create-fztln\" (UID: \"4e39c55c-f8eb-4fbf-8c26-52189974b68c\") " pod="watcher-kuttl-default/watcher-db-create-fztln" Dec 02 20:33:13 crc kubenswrapper[4796]: I1202 20:33:13.440139 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/246b4725-b3de-4c2d-850f-8c02c1066622-operator-scripts\") pod \"watcher-09d1-account-create-update-6d25l\" (UID: \"246b4725-b3de-4c2d-850f-8c02c1066622\") " pod="watcher-kuttl-default/watcher-09d1-account-create-update-6d25l" Dec 02 20:33:13 crc kubenswrapper[4796]: I1202 20:33:13.440222 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrbbf\" (UniqueName: \"kubernetes.io/projected/246b4725-b3de-4c2d-850f-8c02c1066622-kube-api-access-jrbbf\") pod \"watcher-09d1-account-create-update-6d25l\" (UID: \"246b4725-b3de-4c2d-850f-8c02c1066622\") " pod="watcher-kuttl-default/watcher-09d1-account-create-update-6d25l" Dec 02 20:33:13 crc kubenswrapper[4796]: I1202 20:33:13.441163 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e39c55c-f8eb-4fbf-8c26-52189974b68c-operator-scripts\") pod \"watcher-db-create-fztln\" (UID: \"4e39c55c-f8eb-4fbf-8c26-52189974b68c\") " pod="watcher-kuttl-default/watcher-db-create-fztln" Dec 02 20:33:13 crc kubenswrapper[4796]: I1202 20:33:13.441163 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/246b4725-b3de-4c2d-850f-8c02c1066622-operator-scripts\") pod \"watcher-09d1-account-create-update-6d25l\" (UID: \"246b4725-b3de-4c2d-850f-8c02c1066622\") " pod="watcher-kuttl-default/watcher-09d1-account-create-update-6d25l" Dec 02 20:33:13 crc kubenswrapper[4796]: I1202 20:33:13.461398 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrbbf\" (UniqueName: \"kubernetes.io/projected/246b4725-b3de-4c2d-850f-8c02c1066622-kube-api-access-jrbbf\") pod \"watcher-09d1-account-create-update-6d25l\" (UID: \"246b4725-b3de-4c2d-850f-8c02c1066622\") " pod="watcher-kuttl-default/watcher-09d1-account-create-update-6d25l" Dec 02 20:33:13 crc kubenswrapper[4796]: I1202 20:33:13.468098 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzvk2\" (UniqueName: \"kubernetes.io/projected/4e39c55c-f8eb-4fbf-8c26-52189974b68c-kube-api-access-tzvk2\") pod \"watcher-db-create-fztln\" (UID: \"4e39c55c-f8eb-4fbf-8c26-52189974b68c\") " pod="watcher-kuttl-default/watcher-db-create-fztln" Dec 02 20:33:13 crc kubenswrapper[4796]: I1202 20:33:13.559006 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-fztln" Dec 02 20:33:13 crc kubenswrapper[4796]: I1202 20:33:13.585716 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-09d1-account-create-update-6d25l" Dec 02 20:33:14 crc kubenswrapper[4796]: I1202 20:33:14.076930 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-09d1-account-create-update-6d25l"] Dec 02 20:33:14 crc kubenswrapper[4796]: I1202 20:33:14.099280 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-fztln"] Dec 02 20:33:14 crc kubenswrapper[4796]: W1202 20:33:14.101843 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e39c55c_f8eb_4fbf_8c26_52189974b68c.slice/crio-23edce66e0ef128fd9d3b4b518ba7005e78ac0dfb38ed9578c04db635bf554ae WatchSource:0}: Error finding container 23edce66e0ef128fd9d3b4b518ba7005e78ac0dfb38ed9578c04db635bf554ae: Status 404 returned error can't find the container with id 23edce66e0ef128fd9d3b4b518ba7005e78ac0dfb38ed9578c04db635bf554ae Dec 02 20:33:14 crc kubenswrapper[4796]: I1202 20:33:14.232202 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-09d1-account-create-update-6d25l" event={"ID":"246b4725-b3de-4c2d-850f-8c02c1066622","Type":"ContainerStarted","Data":"36e1a29d7e6e1cfdbc2102cc7e94b2fe79610b75cc23bf65cff30de343b2a26f"} Dec 02 20:33:14 crc kubenswrapper[4796]: I1202 20:33:14.233788 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-fztln" event={"ID":"4e39c55c-f8eb-4fbf-8c26-52189974b68c","Type":"ContainerStarted","Data":"23edce66e0ef128fd9d3b4b518ba7005e78ac0dfb38ed9578c04db635bf554ae"} Dec 02 20:33:14 crc kubenswrapper[4796]: I1202 20:33:14.900360 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:33:14 crc kubenswrapper[4796]: I1202 20:33:14.973432 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a71b6c-6980-4fb2-b7c3-021f85f3ec8b-combined-ca-bundle\") pod \"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b\" (UID: \"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b\") " Dec 02 20:33:14 crc kubenswrapper[4796]: I1202 20:33:14.973523 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27a71b6c-6980-4fb2-b7c3-021f85f3ec8b-log-httpd\") pod \"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b\" (UID: \"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b\") " Dec 02 20:33:14 crc kubenswrapper[4796]: I1202 20:33:14.973563 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27a71b6c-6980-4fb2-b7c3-021f85f3ec8b-sg-core-conf-yaml\") pod \"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b\" (UID: \"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b\") " Dec 02 20:33:14 crc kubenswrapper[4796]: I1202 20:33:14.973589 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27a71b6c-6980-4fb2-b7c3-021f85f3ec8b-config-data\") pod \"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b\" (UID: \"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b\") " Dec 02 20:33:14 crc kubenswrapper[4796]: I1202 20:33:14.973616 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27a71b6c-6980-4fb2-b7c3-021f85f3ec8b-run-httpd\") pod \"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b\" (UID: 
\"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b\") " Dec 02 20:33:14 crc kubenswrapper[4796]: I1202 20:33:14.973691 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5d7js\" (UniqueName: \"kubernetes.io/projected/27a71b6c-6980-4fb2-b7c3-021f85f3ec8b-kube-api-access-5d7js\") pod \"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b\" (UID: \"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b\") " Dec 02 20:33:14 crc kubenswrapper[4796]: I1202 20:33:14.973749 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27a71b6c-6980-4fb2-b7c3-021f85f3ec8b-scripts\") pod \"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b\" (UID: \"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b\") " Dec 02 20:33:14 crc kubenswrapper[4796]: I1202 20:33:14.974320 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27a71b6c-6980-4fb2-b7c3-021f85f3ec8b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "27a71b6c-6980-4fb2-b7c3-021f85f3ec8b" (UID: "27a71b6c-6980-4fb2-b7c3-021f85f3ec8b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:33:14 crc kubenswrapper[4796]: I1202 20:33:14.975123 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27a71b6c-6980-4fb2-b7c3-021f85f3ec8b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "27a71b6c-6980-4fb2-b7c3-021f85f3ec8b" (UID: "27a71b6c-6980-4fb2-b7c3-021f85f3ec8b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:33:14 crc kubenswrapper[4796]: I1202 20:33:14.991337 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27a71b6c-6980-4fb2-b7c3-021f85f3ec8b-scripts" (OuterVolumeSpecName: "scripts") pod "27a71b6c-6980-4fb2-b7c3-021f85f3ec8b" (UID: "27a71b6c-6980-4fb2-b7c3-021f85f3ec8b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:33:14 crc kubenswrapper[4796]: I1202 20:33:14.998471 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27a71b6c-6980-4fb2-b7c3-021f85f3ec8b-kube-api-access-5d7js" (OuterVolumeSpecName: "kube-api-access-5d7js") pod "27a71b6c-6980-4fb2-b7c3-021f85f3ec8b" (UID: "27a71b6c-6980-4fb2-b7c3-021f85f3ec8b"). InnerVolumeSpecName "kube-api-access-5d7js". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:33:14 crc kubenswrapper[4796]: I1202 20:33:14.999468 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27a71b6c-6980-4fb2-b7c3-021f85f3ec8b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "27a71b6c-6980-4fb2-b7c3-021f85f3ec8b" (UID: "27a71b6c-6980-4fb2-b7c3-021f85f3ec8b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.034138 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27a71b6c-6980-4fb2-b7c3-021f85f3ec8b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27a71b6c-6980-4fb2-b7c3-021f85f3ec8b" (UID: "27a71b6c-6980-4fb2-b7c3-021f85f3ec8b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.062662 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27a71b6c-6980-4fb2-b7c3-021f85f3ec8b-config-data" (OuterVolumeSpecName: "config-data") pod "27a71b6c-6980-4fb2-b7c3-021f85f3ec8b" (UID: "27a71b6c-6980-4fb2-b7c3-021f85f3ec8b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.084218 4796 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27a71b6c-6980-4fb2-b7c3-021f85f3ec8b-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.084283 4796 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27a71b6c-6980-4fb2-b7c3-021f85f3ec8b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.084298 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27a71b6c-6980-4fb2-b7c3-021f85f3ec8b-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.084309 4796 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27a71b6c-6980-4fb2-b7c3-021f85f3ec8b-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.084321 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5d7js\" (UniqueName: \"kubernetes.io/projected/27a71b6c-6980-4fb2-b7c3-021f85f3ec8b-kube-api-access-5d7js\") on node \"crc\" DevicePath \"\"" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.084333 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27a71b6c-6980-4fb2-b7c3-021f85f3ec8b-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.084342 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a71b6c-6980-4fb2-b7c3-021f85f3ec8b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.246040 4796 generic.go:334] "Generic (PLEG): container finished" podID="27a71b6c-6980-4fb2-b7c3-021f85f3ec8b" containerID="8a7635569f5411e713e7070d61f18c87eb52eb4604d097532a11a25d808a5b32" exitCode=0 Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.246130 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b","Type":"ContainerDied","Data":"8a7635569f5411e713e7070d61f18c87eb52eb4604d097532a11a25d808a5b32"} Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.246164 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"27a71b6c-6980-4fb2-b7c3-021f85f3ec8b","Type":"ContainerDied","Data":"a454d6d6d3336d6c279ce5f3848dd9cc9bf1b4f9cf66697a4c1a254a1a5ea31d"} Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.246182 4796 scope.go:117] "RemoveContainer" containerID="6d367d389dcef9b6ac22acc7449fafe281efe3942ea73f7e9fbe95f811bcbad1" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.246449 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.248343 4796 generic.go:334] "Generic (PLEG): container finished" podID="246b4725-b3de-4c2d-850f-8c02c1066622" containerID="a15d4448cfb7cb1d1327136fb2ac5ebe9f47ab6220d38091735a68069057e09e" exitCode=0 Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.248426 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-09d1-account-create-update-6d25l" event={"ID":"246b4725-b3de-4c2d-850f-8c02c1066622","Type":"ContainerDied","Data":"a15d4448cfb7cb1d1327136fb2ac5ebe9f47ab6220d38091735a68069057e09e"} Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.250702 4796 generic.go:334] "Generic (PLEG): container finished" podID="4e39c55c-f8eb-4fbf-8c26-52189974b68c" containerID="fc851b311078a7af7e708c1ac894e849e46db3980e13fdce30e068515f964f79" exitCode=0 Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.250762 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-fztln" event={"ID":"4e39c55c-f8eb-4fbf-8c26-52189974b68c","Type":"ContainerDied","Data":"fc851b311078a7af7e708c1ac894e849e46db3980e13fdce30e068515f964f79"} Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.265934 4796 scope.go:117] "RemoveContainer" containerID="4f4d656d7487dfe493ec87a8400f050e6e64875c5089bc6c5ad5bacf9d171259" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.293510 4796 scope.go:117] "RemoveContainer" containerID="8a7635569f5411e713e7070d61f18c87eb52eb4604d097532a11a25d808a5b32" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.314091 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.324041 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.327626 4796 scope.go:117] "RemoveContainer" containerID="c00fdd849166ced0bdb2f9410377333f969905fb79cc9b45cf07bc4d21970858" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.341881 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:33:15 crc kubenswrapper[4796]: E1202 20:33:15.342328 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27a71b6c-6980-4fb2-b7c3-021f85f3ec8b" containerName="ceilometer-notification-agent" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.342349 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="27a71b6c-6980-4fb2-b7c3-021f85f3ec8b" containerName="ceilometer-notification-agent" Dec 02 20:33:15 crc kubenswrapper[4796]: E1202 20:33:15.342380 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27a71b6c-6980-4fb2-b7c3-021f85f3ec8b" containerName="ceilometer-central-agent" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.342386 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="27a71b6c-6980-4fb2-b7c3-021f85f3ec8b" containerName="ceilometer-central-agent" Dec 02 20:33:15 crc kubenswrapper[4796]: E1202 20:33:15.342407 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27a71b6c-6980-4fb2-b7c3-021f85f3ec8b" containerName="proxy-httpd" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.343515 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="27a71b6c-6980-4fb2-b7c3-021f85f3ec8b" containerName="proxy-httpd" Dec 02 20:33:15 crc kubenswrapper[4796]: E1202 20:33:15.343533 4796 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27a71b6c-6980-4fb2-b7c3-021f85f3ec8b" containerName="sg-core" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.343540 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="27a71b6c-6980-4fb2-b7c3-021f85f3ec8b" containerName="sg-core" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.343696 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="27a71b6c-6980-4fb2-b7c3-021f85f3ec8b" containerName="proxy-httpd" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.343709 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="27a71b6c-6980-4fb2-b7c3-021f85f3ec8b" containerName="sg-core" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.343718 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="27a71b6c-6980-4fb2-b7c3-021f85f3ec8b" containerName="ceilometer-notification-agent" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.343734 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="27a71b6c-6980-4fb2-b7c3-021f85f3ec8b" containerName="ceilometer-central-agent" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.345558 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.349894 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.350061 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.350197 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.362775 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.379381 4796 scope.go:117] "RemoveContainer" containerID="6d367d389dcef9b6ac22acc7449fafe281efe3942ea73f7e9fbe95f811bcbad1" Dec 02 20:33:15 crc kubenswrapper[4796]: E1202 20:33:15.379889 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d367d389dcef9b6ac22acc7449fafe281efe3942ea73f7e9fbe95f811bcbad1\": container with ID starting with 6d367d389dcef9b6ac22acc7449fafe281efe3942ea73f7e9fbe95f811bcbad1 not found: ID does not exist" containerID="6d367d389dcef9b6ac22acc7449fafe281efe3942ea73f7e9fbe95f811bcbad1" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.379921 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d367d389dcef9b6ac22acc7449fafe281efe3942ea73f7e9fbe95f811bcbad1"} err="failed to get container status \"6d367d389dcef9b6ac22acc7449fafe281efe3942ea73f7e9fbe95f811bcbad1\": rpc error: code = NotFound desc = could not find container \"6d367d389dcef9b6ac22acc7449fafe281efe3942ea73f7e9fbe95f811bcbad1\": container with ID starting with 6d367d389dcef9b6ac22acc7449fafe281efe3942ea73f7e9fbe95f811bcbad1 not found: ID does not exist" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.379945 4796 scope.go:117] "RemoveContainer" containerID="4f4d656d7487dfe493ec87a8400f050e6e64875c5089bc6c5ad5bacf9d171259" Dec 02 20:33:15 crc kubenswrapper[4796]: E1202 20:33:15.380231 4796 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4f4d656d7487dfe493ec87a8400f050e6e64875c5089bc6c5ad5bacf9d171259\": container with ID starting with 4f4d656d7487dfe493ec87a8400f050e6e64875c5089bc6c5ad5bacf9d171259 not found: ID does not exist" containerID="4f4d656d7487dfe493ec87a8400f050e6e64875c5089bc6c5ad5bacf9d171259" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.380272 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f4d656d7487dfe493ec87a8400f050e6e64875c5089bc6c5ad5bacf9d171259"} err="failed to get container status \"4f4d656d7487dfe493ec87a8400f050e6e64875c5089bc6c5ad5bacf9d171259\": rpc error: code = NotFound desc = could not find container \"4f4d656d7487dfe493ec87a8400f050e6e64875c5089bc6c5ad5bacf9d171259\": container with ID starting with 4f4d656d7487dfe493ec87a8400f050e6e64875c5089bc6c5ad5bacf9d171259 not found: ID does not exist" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.380286 4796 scope.go:117] "RemoveContainer" containerID="8a7635569f5411e713e7070d61f18c87eb52eb4604d097532a11a25d808a5b32" Dec 02 20:33:15 crc kubenswrapper[4796]: E1202 20:33:15.380657 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a7635569f5411e713e7070d61f18c87eb52eb4604d097532a11a25d808a5b32\": container with ID starting with 8a7635569f5411e713e7070d61f18c87eb52eb4604d097532a11a25d808a5b32 not found: ID does not exist" containerID="8a7635569f5411e713e7070d61f18c87eb52eb4604d097532a11a25d808a5b32" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.380678 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a7635569f5411e713e7070d61f18c87eb52eb4604d097532a11a25d808a5b32"} err="failed to get container status \"8a7635569f5411e713e7070d61f18c87eb52eb4604d097532a11a25d808a5b32\": rpc error: code = NotFound desc = could not find container \"8a7635569f5411e713e7070d61f18c87eb52eb4604d097532a11a25d808a5b32\": container with ID starting with 8a7635569f5411e713e7070d61f18c87eb52eb4604d097532a11a25d808a5b32 not found: ID does not exist" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.380694 4796 scope.go:117] "RemoveContainer" containerID="c00fdd849166ced0bdb2f9410377333f969905fb79cc9b45cf07bc4d21970858" Dec 02 20:33:15 crc kubenswrapper[4796]: E1202 20:33:15.380934 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c00fdd849166ced0bdb2f9410377333f969905fb79cc9b45cf07bc4d21970858\": container with ID starting with c00fdd849166ced0bdb2f9410377333f969905fb79cc9b45cf07bc4d21970858 not found: ID does not exist" containerID="c00fdd849166ced0bdb2f9410377333f969905fb79cc9b45cf07bc4d21970858" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.380958 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c00fdd849166ced0bdb2f9410377333f969905fb79cc9b45cf07bc4d21970858"} err="failed to get container status \"c00fdd849166ced0bdb2f9410377333f969905fb79cc9b45cf07bc4d21970858\": rpc error: code = NotFound desc = could not find container \"c00fdd849166ced0bdb2f9410377333f969905fb79cc9b45cf07bc4d21970858\": container with ID starting with c00fdd849166ced0bdb2f9410377333f969905fb79cc9b45cf07bc4d21970858 not found: ID does not exist" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.494920 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48c6ccd1-ed03-4648-9633-e7fc3f806d55-config-data\") pod \"ceilometer-0\" (UID: \"48c6ccd1-ed03-4648-9633-e7fc3f806d55\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.495565 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/48c6ccd1-ed03-4648-9633-e7fc3f806d55-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"48c6ccd1-ed03-4648-9633-e7fc3f806d55\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.495611 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvqrx\" (UniqueName: \"kubernetes.io/projected/48c6ccd1-ed03-4648-9633-e7fc3f806d55-kube-api-access-tvqrx\") pod \"ceilometer-0\" (UID: \"48c6ccd1-ed03-4648-9633-e7fc3f806d55\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.495644 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c6ccd1-ed03-4648-9633-e7fc3f806d55-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"48c6ccd1-ed03-4648-9633-e7fc3f806d55\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.495700 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48c6ccd1-ed03-4648-9633-e7fc3f806d55-log-httpd\") pod \"ceilometer-0\" (UID: \"48c6ccd1-ed03-4648-9633-e7fc3f806d55\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.495858 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48c6ccd1-ed03-4648-9633-e7fc3f806d55-run-httpd\") pod \"ceilometer-0\" (UID: \"48c6ccd1-ed03-4648-9633-e7fc3f806d55\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.495955 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48c6ccd1-ed03-4648-9633-e7fc3f806d55-scripts\") pod \"ceilometer-0\" (UID: \"48c6ccd1-ed03-4648-9633-e7fc3f806d55\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.496019 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/48c6ccd1-ed03-4648-9633-e7fc3f806d55-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"48c6ccd1-ed03-4648-9633-e7fc3f806d55\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.597762 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48c6ccd1-ed03-4648-9633-e7fc3f806d55-config-data\") pod \"ceilometer-0\" (UID: \"48c6ccd1-ed03-4648-9633-e7fc3f806d55\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.597819 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/48c6ccd1-ed03-4648-9633-e7fc3f806d55-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"48c6ccd1-ed03-4648-9633-e7fc3f806d55\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.597846 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvqrx\" (UniqueName: \"kubernetes.io/projected/48c6ccd1-ed03-4648-9633-e7fc3f806d55-kube-api-access-tvqrx\") pod \"ceilometer-0\" (UID: \"48c6ccd1-ed03-4648-9633-e7fc3f806d55\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.597862 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c6ccd1-ed03-4648-9633-e7fc3f806d55-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"48c6ccd1-ed03-4648-9633-e7fc3f806d55\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.597888 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48c6ccd1-ed03-4648-9633-e7fc3f806d55-log-httpd\") pod \"ceilometer-0\" (UID: \"48c6ccd1-ed03-4648-9633-e7fc3f806d55\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.597922 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48c6ccd1-ed03-4648-9633-e7fc3f806d55-run-httpd\") pod \"ceilometer-0\" (UID: \"48c6ccd1-ed03-4648-9633-e7fc3f806d55\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.597954 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48c6ccd1-ed03-4648-9633-e7fc3f806d55-scripts\") pod \"ceilometer-0\" (UID: \"48c6ccd1-ed03-4648-9633-e7fc3f806d55\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.597979 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/48c6ccd1-ed03-4648-9633-e7fc3f806d55-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"48c6ccd1-ed03-4648-9633-e7fc3f806d55\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.599086 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48c6ccd1-ed03-4648-9633-e7fc3f806d55-run-httpd\") pod \"ceilometer-0\" (UID: \"48c6ccd1-ed03-4648-9633-e7fc3f806d55\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.599629 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48c6ccd1-ed03-4648-9633-e7fc3f806d55-log-httpd\") pod \"ceilometer-0\" (UID: \"48c6ccd1-ed03-4648-9633-e7fc3f806d55\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.605995 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48c6ccd1-ed03-4648-9633-e7fc3f806d55-config-data\") pod \"ceilometer-0\" (UID: \"48c6ccd1-ed03-4648-9633-e7fc3f806d55\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.609842 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/48c6ccd1-ed03-4648-9633-e7fc3f806d55-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"48c6ccd1-ed03-4648-9633-e7fc3f806d55\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.620390 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48c6ccd1-ed03-4648-9633-e7fc3f806d55-scripts\") pod \"ceilometer-0\" (UID: \"48c6ccd1-ed03-4648-9633-e7fc3f806d55\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.634585 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c6ccd1-ed03-4648-9633-e7fc3f806d55-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"48c6ccd1-ed03-4648-9633-e7fc3f806d55\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.634625 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/48c6ccd1-ed03-4648-9633-e7fc3f806d55-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"48c6ccd1-ed03-4648-9633-e7fc3f806d55\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.639010 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvqrx\" (UniqueName: \"kubernetes.io/projected/48c6ccd1-ed03-4648-9633-e7fc3f806d55-kube-api-access-tvqrx\") pod \"ceilometer-0\" (UID: \"48c6ccd1-ed03-4648-9633-e7fc3f806d55\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:33:15 crc kubenswrapper[4796]: I1202 20:33:15.686164 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:33:16 crc kubenswrapper[4796]: I1202 20:33:16.055332 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:33:16 crc kubenswrapper[4796]: W1202 20:33:16.062418 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48c6ccd1_ed03_4648_9633_e7fc3f806d55.slice/crio-c26615b6106981aefdb225eea243bf72747d54937c65d6aa4375233f6d6aae34 WatchSource:0}: Error finding container c26615b6106981aefdb225eea243bf72747d54937c65d6aa4375233f6d6aae34: Status 404 returned error can't find the container with id c26615b6106981aefdb225eea243bf72747d54937c65d6aa4375233f6d6aae34 Dec 02 20:33:16 crc kubenswrapper[4796]: I1202 20:33:16.289132 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"48c6ccd1-ed03-4648-9633-e7fc3f806d55","Type":"ContainerStarted","Data":"c26615b6106981aefdb225eea243bf72747d54937c65d6aa4375233f6d6aae34"} Dec 02 20:33:16 crc kubenswrapper[4796]: I1202 20:33:16.728602 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-09d1-account-create-update-6d25l" Dec 02 20:33:16 crc kubenswrapper[4796]: I1202 20:33:16.735226 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-fztln" Dec 02 20:33:16 crc kubenswrapper[4796]: I1202 20:33:16.819872 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e39c55c-f8eb-4fbf-8c26-52189974b68c-operator-scripts\") pod \"4e39c55c-f8eb-4fbf-8c26-52189974b68c\" (UID: \"4e39c55c-f8eb-4fbf-8c26-52189974b68c\") " Dec 02 20:33:16 crc kubenswrapper[4796]: I1202 20:33:16.819952 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/246b4725-b3de-4c2d-850f-8c02c1066622-operator-scripts\") pod \"246b4725-b3de-4c2d-850f-8c02c1066622\" (UID: \"246b4725-b3de-4c2d-850f-8c02c1066622\") " Dec 02 20:33:16 crc kubenswrapper[4796]: I1202 20:33:16.820072 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrbbf\" (UniqueName: \"kubernetes.io/projected/246b4725-b3de-4c2d-850f-8c02c1066622-kube-api-access-jrbbf\") pod \"246b4725-b3de-4c2d-850f-8c02c1066622\" (UID: \"246b4725-b3de-4c2d-850f-8c02c1066622\") " Dec 02 20:33:16 crc kubenswrapper[4796]: I1202 20:33:16.820105 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzvk2\" (UniqueName: \"kubernetes.io/projected/4e39c55c-f8eb-4fbf-8c26-52189974b68c-kube-api-access-tzvk2\") pod \"4e39c55c-f8eb-4fbf-8c26-52189974b68c\" (UID: \"4e39c55c-f8eb-4fbf-8c26-52189974b68c\") " Dec 02 20:33:16 crc kubenswrapper[4796]: I1202 20:33:16.822263 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e39c55c-f8eb-4fbf-8c26-52189974b68c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4e39c55c-f8eb-4fbf-8c26-52189974b68c" (UID: "4e39c55c-f8eb-4fbf-8c26-52189974b68c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:33:16 crc kubenswrapper[4796]: I1202 20:33:16.822410 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/246b4725-b3de-4c2d-850f-8c02c1066622-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "246b4725-b3de-4c2d-850f-8c02c1066622" (UID: "246b4725-b3de-4c2d-850f-8c02c1066622"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:33:16 crc kubenswrapper[4796]: I1202 20:33:16.841621 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e39c55c-f8eb-4fbf-8c26-52189974b68c-kube-api-access-tzvk2" (OuterVolumeSpecName: "kube-api-access-tzvk2") pod "4e39c55c-f8eb-4fbf-8c26-52189974b68c" (UID: "4e39c55c-f8eb-4fbf-8c26-52189974b68c"). InnerVolumeSpecName "kube-api-access-tzvk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:33:16 crc kubenswrapper[4796]: I1202 20:33:16.841685 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/246b4725-b3de-4c2d-850f-8c02c1066622-kube-api-access-jrbbf" (OuterVolumeSpecName: "kube-api-access-jrbbf") pod "246b4725-b3de-4c2d-850f-8c02c1066622" (UID: "246b4725-b3de-4c2d-850f-8c02c1066622"). InnerVolumeSpecName "kube-api-access-jrbbf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:33:16 crc kubenswrapper[4796]: I1202 20:33:16.921690 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/246b4725-b3de-4c2d-850f-8c02c1066622-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:33:16 crc kubenswrapper[4796]: I1202 20:33:16.921840 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrbbf\" (UniqueName: \"kubernetes.io/projected/246b4725-b3de-4c2d-850f-8c02c1066622-kube-api-access-jrbbf\") on node \"crc\" DevicePath \"\"" Dec 02 20:33:16 crc kubenswrapper[4796]: I1202 20:33:16.921897 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzvk2\" (UniqueName: \"kubernetes.io/projected/4e39c55c-f8eb-4fbf-8c26-52189974b68c-kube-api-access-tzvk2\") on node \"crc\" DevicePath \"\"" Dec 02 20:33:16 crc kubenswrapper[4796]: I1202 20:33:16.921952 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e39c55c-f8eb-4fbf-8c26-52189974b68c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:33:17 crc kubenswrapper[4796]: I1202 20:33:17.277567 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27a71b6c-6980-4fb2-b7c3-021f85f3ec8b" path="/var/lib/kubelet/pods/27a71b6c-6980-4fb2-b7c3-021f85f3ec8b/volumes" Dec 02 20:33:17 crc kubenswrapper[4796]: I1202 20:33:17.309996 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"48c6ccd1-ed03-4648-9633-e7fc3f806d55","Type":"ContainerStarted","Data":"0e45ecdae9eb0e7283fae6c137dfb6a7d27746a4fa6fe833f0fe33585f647fbd"} Dec 02 20:33:17 crc kubenswrapper[4796]: I1202 20:33:17.311936 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-fztln" event={"ID":"4e39c55c-f8eb-4fbf-8c26-52189974b68c","Type":"ContainerDied","Data":"23edce66e0ef128fd9d3b4b518ba7005e78ac0dfb38ed9578c04db635bf554ae"} Dec 02 20:33:17 crc kubenswrapper[4796]: I1202 20:33:17.311967 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23edce66e0ef128fd9d3b4b518ba7005e78ac0dfb38ed9578c04db635bf554ae" Dec 02 20:33:17 crc kubenswrapper[4796]: I1202 20:33:17.312035 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-fztln" Dec 02 20:33:17 crc kubenswrapper[4796]: I1202 20:33:17.321204 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-09d1-account-create-update-6d25l" event={"ID":"246b4725-b3de-4c2d-850f-8c02c1066622","Type":"ContainerDied","Data":"36e1a29d7e6e1cfdbc2102cc7e94b2fe79610b75cc23bf65cff30de343b2a26f"} Dec 02 20:33:17 crc kubenswrapper[4796]: I1202 20:33:17.321670 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36e1a29d7e6e1cfdbc2102cc7e94b2fe79610b75cc23bf65cff30de343b2a26f" Dec 02 20:33:17 crc kubenswrapper[4796]: I1202 20:33:17.321361 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-09d1-account-create-update-6d25l" Dec 02 20:33:18 crc kubenswrapper[4796]: I1202 20:33:18.337026 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"48c6ccd1-ed03-4648-9633-e7fc3f806d55","Type":"ContainerStarted","Data":"0ea0be5300a199165f7d151cf1b9cdc7423a14a6de16d2f81877609da9b00547"} Dec 02 20:33:18 crc kubenswrapper[4796]: I1202 20:33:18.337493 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"48c6ccd1-ed03-4648-9633-e7fc3f806d55","Type":"ContainerStarted","Data":"7c20240715c75948ce34069d0dd0d6d672236917aa10e4e75672e4f252a3f0d1"} Dec 02 20:33:18 crc kubenswrapper[4796]: I1202 20:33:18.696487 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-wl7wx"] Dec 02 20:33:18 crc kubenswrapper[4796]: E1202 20:33:18.696872 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="246b4725-b3de-4c2d-850f-8c02c1066622" containerName="mariadb-account-create-update" Dec 02 20:33:18 crc kubenswrapper[4796]: I1202 20:33:18.696892 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="246b4725-b3de-4c2d-850f-8c02c1066622" containerName="mariadb-account-create-update" Dec 02 20:33:18 crc kubenswrapper[4796]: E1202 20:33:18.696926 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e39c55c-f8eb-4fbf-8c26-52189974b68c" containerName="mariadb-database-create" Dec 02 20:33:18 crc kubenswrapper[4796]: I1202 20:33:18.696933 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e39c55c-f8eb-4fbf-8c26-52189974b68c" containerName="mariadb-database-create" Dec 02 20:33:18 crc kubenswrapper[4796]: I1202 20:33:18.697098 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="246b4725-b3de-4c2d-850f-8c02c1066622" containerName="mariadb-account-create-update" Dec 02 20:33:18 crc kubenswrapper[4796]: I1202 20:33:18.697118 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e39c55c-f8eb-4fbf-8c26-52189974b68c" containerName="mariadb-database-create" Dec 02 20:33:18 crc kubenswrapper[4796]: I1202 20:33:18.697922 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-wl7wx" Dec 02 20:33:18 crc kubenswrapper[4796]: I1202 20:33:18.700065 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Dec 02 20:33:18 crc kubenswrapper[4796]: I1202 20:33:18.704551 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-fnk6w" Dec 02 20:33:18 crc kubenswrapper[4796]: I1202 20:33:18.711739 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-wl7wx"] Dec 02 20:33:18 crc kubenswrapper[4796]: I1202 20:33:18.753710 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26036a99-f656-4eb9-8872-be62c7aa833b-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-wl7wx\" (UID: \"26036a99-f656-4eb9-8872-be62c7aa833b\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-wl7wx" Dec 02 20:33:18 crc kubenswrapper[4796]: I1202 20:33:18.753793 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr7ll\" (UniqueName: \"kubernetes.io/projected/26036a99-f656-4eb9-8872-be62c7aa833b-kube-api-access-fr7ll\") pod \"watcher-kuttl-db-sync-wl7wx\" (UID: \"26036a99-f656-4eb9-8872-be62c7aa833b\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-wl7wx" Dec 02 20:33:18 crc kubenswrapper[4796]: I1202 20:33:18.753823 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26036a99-f656-4eb9-8872-be62c7aa833b-config-data\") pod \"watcher-kuttl-db-sync-wl7wx\" (UID: \"26036a99-f656-4eb9-8872-be62c7aa833b\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-wl7wx" Dec 02 20:33:18 crc kubenswrapper[4796]: I1202 20:33:18.753845 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/26036a99-f656-4eb9-8872-be62c7aa833b-db-sync-config-data\") pod \"watcher-kuttl-db-sync-wl7wx\" (UID: \"26036a99-f656-4eb9-8872-be62c7aa833b\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-wl7wx" Dec 02 20:33:18 crc kubenswrapper[4796]: I1202 20:33:18.854859 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26036a99-f656-4eb9-8872-be62c7aa833b-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-wl7wx\" (UID: \"26036a99-f656-4eb9-8872-be62c7aa833b\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-wl7wx" Dec 02 20:33:18 crc kubenswrapper[4796]: I1202 20:33:18.854947 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr7ll\" (UniqueName: \"kubernetes.io/projected/26036a99-f656-4eb9-8872-be62c7aa833b-kube-api-access-fr7ll\") pod \"watcher-kuttl-db-sync-wl7wx\" (UID: \"26036a99-f656-4eb9-8872-be62c7aa833b\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-wl7wx" Dec 02 20:33:18 crc kubenswrapper[4796]: I1202 20:33:18.854983 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26036a99-f656-4eb9-8872-be62c7aa833b-config-data\") pod \"watcher-kuttl-db-sync-wl7wx\" (UID: \"26036a99-f656-4eb9-8872-be62c7aa833b\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-wl7wx" Dec 02 20:33:18 crc 
kubenswrapper[4796]: I1202 20:33:18.855003 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/26036a99-f656-4eb9-8872-be62c7aa833b-db-sync-config-data\") pod \"watcher-kuttl-db-sync-wl7wx\" (UID: \"26036a99-f656-4eb9-8872-be62c7aa833b\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-wl7wx" Dec 02 20:33:18 crc kubenswrapper[4796]: I1202 20:33:18.859416 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/26036a99-f656-4eb9-8872-be62c7aa833b-db-sync-config-data\") pod \"watcher-kuttl-db-sync-wl7wx\" (UID: \"26036a99-f656-4eb9-8872-be62c7aa833b\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-wl7wx" Dec 02 20:33:18 crc kubenswrapper[4796]: I1202 20:33:18.861154 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26036a99-f656-4eb9-8872-be62c7aa833b-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-wl7wx\" (UID: \"26036a99-f656-4eb9-8872-be62c7aa833b\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-wl7wx" Dec 02 20:33:18 crc kubenswrapper[4796]: I1202 20:33:18.866116 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26036a99-f656-4eb9-8872-be62c7aa833b-config-data\") pod \"watcher-kuttl-db-sync-wl7wx\" (UID: \"26036a99-f656-4eb9-8872-be62c7aa833b\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-wl7wx" Dec 02 20:33:18 crc kubenswrapper[4796]: I1202 20:33:18.869527 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr7ll\" (UniqueName: \"kubernetes.io/projected/26036a99-f656-4eb9-8872-be62c7aa833b-kube-api-access-fr7ll\") pod \"watcher-kuttl-db-sync-wl7wx\" (UID: \"26036a99-f656-4eb9-8872-be62c7aa833b\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-wl7wx" Dec 02 20:33:19 crc kubenswrapper[4796]: I1202 20:33:19.017122 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-wl7wx" Dec 02 20:33:19 crc kubenswrapper[4796]: I1202 20:33:19.503811 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-wl7wx"] Dec 02 20:33:19 crc kubenswrapper[4796]: W1202 20:33:19.512685 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26036a99_f656_4eb9_8872_be62c7aa833b.slice/crio-359db6bc976b9453af270bf6efba581a02756aeedf39d9a53e7cdb9d583e9cf3 WatchSource:0}: Error finding container 359db6bc976b9453af270bf6efba581a02756aeedf39d9a53e7cdb9d583e9cf3: Status 404 returned error can't find the container with id 359db6bc976b9453af270bf6efba581a02756aeedf39d9a53e7cdb9d583e9cf3 Dec 02 20:33:20 crc kubenswrapper[4796]: I1202 20:33:20.378938 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"48c6ccd1-ed03-4648-9633-e7fc3f806d55","Type":"ContainerStarted","Data":"14595e5aec24ed1d87d55f37b9cd2258d62e23202c6852f87b1dbfd03d7ac5bd"} Dec 02 20:33:20 crc kubenswrapper[4796]: I1202 20:33:20.379774 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:33:20 crc kubenswrapper[4796]: I1202 20:33:20.383123 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-wl7wx" event={"ID":"26036a99-f656-4eb9-8872-be62c7aa833b","Type":"ContainerStarted","Data":"359db6bc976b9453af270bf6efba581a02756aeedf39d9a53e7cdb9d583e9cf3"} Dec 02 20:33:20 crc kubenswrapper[4796]: I1202 20:33:20.402566 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.057533317 podStartE2EDuration="5.402540219s" podCreationTimestamp="2025-12-02 20:33:15 +0000 UTC" firstStartedPulling="2025-12-02 20:33:16.065100325 +0000 UTC m=+1279.068475859" lastFinishedPulling="2025-12-02 20:33:19.410107227 +0000 UTC m=+1282.413482761" observedRunningTime="2025-12-02 20:33:20.399756641 +0000 UTC m=+1283.403132175" watchObservedRunningTime="2025-12-02 20:33:20.402540219 +0000 UTC m=+1283.405915753" Dec 02 20:33:20 crc kubenswrapper[4796]: I1202 20:33:20.620319 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/kube-state-metrics-0" Dec 02 20:33:35 crc kubenswrapper[4796]: E1202 20:33:35.191908 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.113:5001/podified-master-centos10/openstack-watcher-api:watcher_latest" Dec 02 20:33:35 crc kubenswrapper[4796]: E1202 20:33:35.192680 4796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.113:5001/podified-master-centos10/openstack-watcher-api:watcher_latest" Dec 02 20:33:35 crc kubenswrapper[4796]: E1202 20:33:35.192843 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:watcher-kuttl-db-sync,Image:38.102.83.113:5001/podified-master-centos10/openstack-watcher-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/watcher/watcher.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:watcher-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fr7ll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-kuttl-db-sync-wl7wx_watcher-kuttl-default(26036a99-f656-4eb9-8872-be62c7aa833b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 20:33:35 crc kubenswrapper[4796]: E1202 20:33:35.194147 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-kuttl-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="watcher-kuttl-default/watcher-kuttl-db-sync-wl7wx" podUID="26036a99-f656-4eb9-8872-be62c7aa833b" Dec 02 20:33:35 crc kubenswrapper[4796]: E1202 20:33:35.548366 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-kuttl-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.113:5001/podified-master-centos10/openstack-watcher-api:watcher_latest\\\"\"" pod="watcher-kuttl-default/watcher-kuttl-db-sync-wl7wx" podUID="26036a99-f656-4eb9-8872-be62c7aa833b" Dec 02 20:33:45 crc kubenswrapper[4796]: I1202 20:33:45.704114 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:33:47 crc kubenswrapper[4796]: I1202 20:33:47.677942 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-wl7wx" event={"ID":"26036a99-f656-4eb9-8872-be62c7aa833b","Type":"ContainerStarted","Data":"2d07824fb9033fb5945a1d9f5a408ae5f38afa3c92f14ff7c6117b16db4f5a45"} Dec 02 20:33:50 crc kubenswrapper[4796]: I1202 20:33:50.716800 4796 generic.go:334] 
"Generic (PLEG): container finished" podID="26036a99-f656-4eb9-8872-be62c7aa833b" containerID="2d07824fb9033fb5945a1d9f5a408ae5f38afa3c92f14ff7c6117b16db4f5a45" exitCode=0 Dec 02 20:33:50 crc kubenswrapper[4796]: I1202 20:33:50.716929 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-wl7wx" event={"ID":"26036a99-f656-4eb9-8872-be62c7aa833b","Type":"ContainerDied","Data":"2d07824fb9033fb5945a1d9f5a408ae5f38afa3c92f14ff7c6117b16db4f5a45"} Dec 02 20:33:52 crc kubenswrapper[4796]: I1202 20:33:52.164476 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-wl7wx" Dec 02 20:33:52 crc kubenswrapper[4796]: I1202 20:33:52.222378 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fr7ll\" (UniqueName: \"kubernetes.io/projected/26036a99-f656-4eb9-8872-be62c7aa833b-kube-api-access-fr7ll\") pod \"26036a99-f656-4eb9-8872-be62c7aa833b\" (UID: \"26036a99-f656-4eb9-8872-be62c7aa833b\") " Dec 02 20:33:52 crc kubenswrapper[4796]: I1202 20:33:52.222490 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26036a99-f656-4eb9-8872-be62c7aa833b-config-data\") pod \"26036a99-f656-4eb9-8872-be62c7aa833b\" (UID: \"26036a99-f656-4eb9-8872-be62c7aa833b\") " Dec 02 20:33:52 crc kubenswrapper[4796]: I1202 20:33:52.222538 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26036a99-f656-4eb9-8872-be62c7aa833b-combined-ca-bundle\") pod \"26036a99-f656-4eb9-8872-be62c7aa833b\" (UID: \"26036a99-f656-4eb9-8872-be62c7aa833b\") " Dec 02 20:33:52 crc kubenswrapper[4796]: I1202 20:33:52.222718 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/26036a99-f656-4eb9-8872-be62c7aa833b-db-sync-config-data\") pod \"26036a99-f656-4eb9-8872-be62c7aa833b\" (UID: \"26036a99-f656-4eb9-8872-be62c7aa833b\") " Dec 02 20:33:52 crc kubenswrapper[4796]: I1202 20:33:52.230367 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26036a99-f656-4eb9-8872-be62c7aa833b-kube-api-access-fr7ll" (OuterVolumeSpecName: "kube-api-access-fr7ll") pod "26036a99-f656-4eb9-8872-be62c7aa833b" (UID: "26036a99-f656-4eb9-8872-be62c7aa833b"). InnerVolumeSpecName "kube-api-access-fr7ll". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:33:52 crc kubenswrapper[4796]: I1202 20:33:52.241896 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26036a99-f656-4eb9-8872-be62c7aa833b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "26036a99-f656-4eb9-8872-be62c7aa833b" (UID: "26036a99-f656-4eb9-8872-be62c7aa833b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:33:52 crc kubenswrapper[4796]: I1202 20:33:52.267357 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26036a99-f656-4eb9-8872-be62c7aa833b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26036a99-f656-4eb9-8872-be62c7aa833b" (UID: "26036a99-f656-4eb9-8872-be62c7aa833b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:33:52 crc kubenswrapper[4796]: I1202 20:33:52.271528 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26036a99-f656-4eb9-8872-be62c7aa833b-config-data" (OuterVolumeSpecName: "config-data") pod "26036a99-f656-4eb9-8872-be62c7aa833b" (UID: "26036a99-f656-4eb9-8872-be62c7aa833b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:33:52 crc kubenswrapper[4796]: I1202 20:33:52.325222 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fr7ll\" (UniqueName: \"kubernetes.io/projected/26036a99-f656-4eb9-8872-be62c7aa833b-kube-api-access-fr7ll\") on node \"crc\" DevicePath \"\"" Dec 02 20:33:52 crc kubenswrapper[4796]: I1202 20:33:52.325299 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26036a99-f656-4eb9-8872-be62c7aa833b-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:33:52 crc kubenswrapper[4796]: I1202 20:33:52.325313 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26036a99-f656-4eb9-8872-be62c7aa833b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:33:52 crc kubenswrapper[4796]: I1202 20:33:52.325328 4796 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/26036a99-f656-4eb9-8872-be62c7aa833b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:33:52 crc kubenswrapper[4796]: I1202 20:33:52.749286 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-wl7wx" event={"ID":"26036a99-f656-4eb9-8872-be62c7aa833b","Type":"ContainerDied","Data":"359db6bc976b9453af270bf6efba581a02756aeedf39d9a53e7cdb9d583e9cf3"} Dec 02 20:33:52 crc kubenswrapper[4796]: I1202 20:33:52.749358 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="359db6bc976b9453af270bf6efba581a02756aeedf39d9a53e7cdb9d583e9cf3" Dec 02 20:33:52 crc kubenswrapper[4796]: I1202 20:33:52.749366 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-wl7wx" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.197664 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:33:53 crc kubenswrapper[4796]: E1202 20:33:53.198395 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26036a99-f656-4eb9-8872-be62c7aa833b" containerName="watcher-kuttl-db-sync" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.198409 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="26036a99-f656-4eb9-8872-be62c7aa833b" containerName="watcher-kuttl-db-sync" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.198597 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="26036a99-f656-4eb9-8872-be62c7aa833b" containerName="watcher-kuttl-db-sync" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.199192 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.201903 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-fnk6w" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.212446 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.212565 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.219720 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.223988 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.232305 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.244653 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ae87438c-942d-4cdd-9a4d-999d85c68a0c-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ae87438c-942d-4cdd-9a4d-999d85c68a0c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.244911 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae87438c-942d-4cdd-9a4d-999d85c68a0c-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ae87438c-942d-4cdd-9a4d-999d85c68a0c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.245013 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2def312c-511b-4198-a20f-b27ea549d5db-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"2def312c-511b-4198-a20f-b27ea549d5db\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.245098 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99dd9\" (UniqueName: \"kubernetes.io/projected/2def312c-511b-4198-a20f-b27ea549d5db-kube-api-access-99dd9\") pod \"watcher-kuttl-api-0\" (UID: \"2def312c-511b-4198-a20f-b27ea549d5db\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.245199 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae87438c-942d-4cdd-9a4d-999d85c68a0c-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ae87438c-942d-4cdd-9a4d-999d85c68a0c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.245290 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2def312c-511b-4198-a20f-b27ea549d5db-logs\") pod 
\"watcher-kuttl-api-0\" (UID: \"2def312c-511b-4198-a20f-b27ea549d5db\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.245373 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d82j\" (UniqueName: \"kubernetes.io/projected/ae87438c-942d-4cdd-9a4d-999d85c68a0c-kube-api-access-4d82j\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ae87438c-942d-4cdd-9a4d-999d85c68a0c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.245498 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2def312c-511b-4198-a20f-b27ea549d5db-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"2def312c-511b-4198-a20f-b27ea549d5db\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.245579 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae87438c-942d-4cdd-9a4d-999d85c68a0c-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ae87438c-942d-4cdd-9a4d-999d85c68a0c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.245648 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2def312c-511b-4198-a20f-b27ea549d5db-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"2def312c-511b-4198-a20f-b27ea549d5db\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.254434 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.310564 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.312083 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.314789 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.318966 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.353332 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb4249e1-cbfb-436a-b176-913d50a5f8e9-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"bb4249e1-cbfb-436a-b176-913d50a5f8e9\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.353475 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2def312c-511b-4198-a20f-b27ea549d5db-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"2def312c-511b-4198-a20f-b27ea549d5db\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.353687 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb4249e1-cbfb-436a-b176-913d50a5f8e9-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"bb4249e1-cbfb-436a-b176-913d50a5f8e9\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.353796 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae87438c-942d-4cdd-9a4d-999d85c68a0c-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ae87438c-942d-4cdd-9a4d-999d85c68a0c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.353829 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2def312c-511b-4198-a20f-b27ea549d5db-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"2def312c-511b-4198-a20f-b27ea549d5db\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.353916 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-686h2\" (UniqueName: \"kubernetes.io/projected/bb4249e1-cbfb-436a-b176-913d50a5f8e9-kube-api-access-686h2\") pod \"watcher-kuttl-applier-0\" (UID: \"bb4249e1-cbfb-436a-b176-913d50a5f8e9\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.353967 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ae87438c-942d-4cdd-9a4d-999d85c68a0c-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ae87438c-942d-4cdd-9a4d-999d85c68a0c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.354201 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae87438c-942d-4cdd-9a4d-999d85c68a0c-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: 
\"ae87438c-942d-4cdd-9a4d-999d85c68a0c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.354338 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2def312c-511b-4198-a20f-b27ea549d5db-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"2def312c-511b-4198-a20f-b27ea549d5db\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.354396 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99dd9\" (UniqueName: \"kubernetes.io/projected/2def312c-511b-4198-a20f-b27ea549d5db-kube-api-access-99dd9\") pod \"watcher-kuttl-api-0\" (UID: \"2def312c-511b-4198-a20f-b27ea549d5db\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.354478 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae87438c-942d-4cdd-9a4d-999d85c68a0c-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ae87438c-942d-4cdd-9a4d-999d85c68a0c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.354514 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb4249e1-cbfb-436a-b176-913d50a5f8e9-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"bb4249e1-cbfb-436a-b176-913d50a5f8e9\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.354533 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2def312c-511b-4198-a20f-b27ea549d5db-logs\") pod \"watcher-kuttl-api-0\" (UID: \"2def312c-511b-4198-a20f-b27ea549d5db\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.354562 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d82j\" (UniqueName: \"kubernetes.io/projected/ae87438c-942d-4cdd-9a4d-999d85c68a0c-kube-api-access-4d82j\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ae87438c-942d-4cdd-9a4d-999d85c68a0c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.360036 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2def312c-511b-4198-a20f-b27ea549d5db-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"2def312c-511b-4198-a20f-b27ea549d5db\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.360111 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ae87438c-942d-4cdd-9a4d-999d85c68a0c-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ae87438c-942d-4cdd-9a4d-999d85c68a0c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.360193 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2def312c-511b-4198-a20f-b27ea549d5db-config-data\") pod \"watcher-kuttl-api-0\" (UID: 
\"2def312c-511b-4198-a20f-b27ea549d5db\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.360374 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae87438c-942d-4cdd-9a4d-999d85c68a0c-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ae87438c-942d-4cdd-9a4d-999d85c68a0c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.360647 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2def312c-511b-4198-a20f-b27ea549d5db-logs\") pod \"watcher-kuttl-api-0\" (UID: \"2def312c-511b-4198-a20f-b27ea549d5db\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.363277 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae87438c-942d-4cdd-9a4d-999d85c68a0c-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ae87438c-942d-4cdd-9a4d-999d85c68a0c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.364642 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2def312c-511b-4198-a20f-b27ea549d5db-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"2def312c-511b-4198-a20f-b27ea549d5db\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.369630 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae87438c-942d-4cdd-9a4d-999d85c68a0c-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ae87438c-942d-4cdd-9a4d-999d85c68a0c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.370071 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d82j\" (UniqueName: \"kubernetes.io/projected/ae87438c-942d-4cdd-9a4d-999d85c68a0c-kube-api-access-4d82j\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ae87438c-942d-4cdd-9a4d-999d85c68a0c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.373892 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99dd9\" (UniqueName: \"kubernetes.io/projected/2def312c-511b-4198-a20f-b27ea549d5db-kube-api-access-99dd9\") pod \"watcher-kuttl-api-0\" (UID: \"2def312c-511b-4198-a20f-b27ea549d5db\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.456092 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb4249e1-cbfb-436a-b176-913d50a5f8e9-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"bb4249e1-cbfb-436a-b176-913d50a5f8e9\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.456168 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb4249e1-cbfb-436a-b176-913d50a5f8e9-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"bb4249e1-cbfb-436a-b176-913d50a5f8e9\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" 
Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.456205 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-686h2\" (UniqueName: \"kubernetes.io/projected/bb4249e1-cbfb-436a-b176-913d50a5f8e9-kube-api-access-686h2\") pod \"watcher-kuttl-applier-0\" (UID: \"bb4249e1-cbfb-436a-b176-913d50a5f8e9\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.456300 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb4249e1-cbfb-436a-b176-913d50a5f8e9-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"bb4249e1-cbfb-436a-b176-913d50a5f8e9\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.457323 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb4249e1-cbfb-436a-b176-913d50a5f8e9-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"bb4249e1-cbfb-436a-b176-913d50a5f8e9\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.460927 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb4249e1-cbfb-436a-b176-913d50a5f8e9-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"bb4249e1-cbfb-436a-b176-913d50a5f8e9\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.462940 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb4249e1-cbfb-436a-b176-913d50a5f8e9-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"bb4249e1-cbfb-436a-b176-913d50a5f8e9\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.481320 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-686h2\" (UniqueName: \"kubernetes.io/projected/bb4249e1-cbfb-436a-b176-913d50a5f8e9-kube-api-access-686h2\") pod \"watcher-kuttl-applier-0\" (UID: \"bb4249e1-cbfb-436a-b176-913d50a5f8e9\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.533025 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.555097 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:33:53 crc kubenswrapper[4796]: I1202 20:33:53.630735 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:33:54 crc kubenswrapper[4796]: I1202 20:33:54.109632 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:33:54 crc kubenswrapper[4796]: I1202 20:33:54.185791 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:33:54 crc kubenswrapper[4796]: I1202 20:33:54.193044 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:33:54 crc kubenswrapper[4796]: I1202 20:33:54.797599 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"2def312c-511b-4198-a20f-b27ea549d5db","Type":"ContainerStarted","Data":"b0c7fd5192b5983a7d4a623344542a60682ae5485c43b19fc6029c3433ba2c48"} Dec 02 20:33:54 crc kubenswrapper[4796]: I1202 20:33:54.798312 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"2def312c-511b-4198-a20f-b27ea549d5db","Type":"ContainerStarted","Data":"70b4481d9515813ff02ab44f0b88c71ea77e2ee6937fbc1429ffbd7babbc4f4b"} Dec 02 20:33:54 crc kubenswrapper[4796]: I1202 20:33:54.798334 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:33:54 crc kubenswrapper[4796]: I1202 20:33:54.798346 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"2def312c-511b-4198-a20f-b27ea549d5db","Type":"ContainerStarted","Data":"67d1cc356303c741efe2d900465ac7329675cbc72c543da1a9b89a88d1effa42"} Dec 02 20:33:54 crc kubenswrapper[4796]: I1202 20:33:54.800981 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="2def312c-511b-4198-a20f-b27ea549d5db" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.133:9322/\": dial tcp 10.217.0.133:9322: connect: connection refused" Dec 02 20:33:54 crc kubenswrapper[4796]: I1202 20:33:54.801479 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"ae87438c-942d-4cdd-9a4d-999d85c68a0c","Type":"ContainerStarted","Data":"fcc61ed5f456b711b2f9a7c5cd211b7f43c80501792c508f7646b6bafbdc2bb9"} Dec 02 20:33:54 crc kubenswrapper[4796]: I1202 20:33:54.802751 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"bb4249e1-cbfb-436a-b176-913d50a5f8e9","Type":"ContainerStarted","Data":"7d90da9281ea1494389b37bd8a21287d8541b7ca64503cc591f8ebfd556165bf"} Dec 02 20:33:54 crc kubenswrapper[4796]: I1202 20:33:54.821396 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=1.821370089 podStartE2EDuration="1.821370089s" podCreationTimestamp="2025-12-02 20:33:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:33:54.815831244 +0000 UTC m=+1317.819206788" watchObservedRunningTime="2025-12-02 20:33:54.821370089 +0000 UTC m=+1317.824745623" Dec 02 20:33:55 crc kubenswrapper[4796]: I1202 20:33:55.189065 4796 patch_prober.go:28] interesting pod/machine-config-daemon-wzhpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 20:33:55 crc kubenswrapper[4796]: I1202 20:33:55.189341 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 20:33:55 crc kubenswrapper[4796]: I1202 20:33:55.812031 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"ae87438c-942d-4cdd-9a4d-999d85c68a0c","Type":"ContainerStarted","Data":"19c5320dcb0fbb9c9fe028f070079bc855d68bc03882aa595b3f120b8ec164cc"} Dec 02 20:33:55 crc kubenswrapper[4796]: I1202 20:33:55.815449 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"bb4249e1-cbfb-436a-b176-913d50a5f8e9","Type":"ContainerStarted","Data":"8df7c619b1b661cdf6b9168ea6d4ab6152ca182430d7e38a00b3b04f6fa8490b"} Dec 02 20:33:55 crc kubenswrapper[4796]: I1202 20:33:55.884202 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=1.7873112070000001 podStartE2EDuration="2.884175789s" podCreationTimestamp="2025-12-02 20:33:53 +0000 UTC" firstStartedPulling="2025-12-02 20:33:54.118443126 +0000 UTC m=+1317.121818660" lastFinishedPulling="2025-12-02 20:33:55.215307708 +0000 UTC m=+1318.218683242" observedRunningTime="2025-12-02 20:33:55.861922045 +0000 UTC m=+1318.865297579" watchObservedRunningTime="2025-12-02 20:33:55.884175789 +0000 UTC m=+1318.887551323" Dec 02 20:33:55 crc kubenswrapper[4796]: I1202 20:33:55.886655 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=1.869183107 podStartE2EDuration="2.886645889s" podCreationTimestamp="2025-12-02 20:33:53 +0000 UTC" firstStartedPulling="2025-12-02 20:33:54.201233948 +0000 UTC m=+1317.204609482" lastFinishedPulling="2025-12-02 20:33:55.21869673 +0000 UTC m=+1318.222072264" observedRunningTime="2025-12-02 20:33:55.879177717 +0000 UTC m=+1318.882553251" watchObservedRunningTime="2025-12-02 20:33:55.886645889 +0000 UTC m=+1318.890021423" Dec 02 20:33:58 crc kubenswrapper[4796]: I1202 20:33:58.268893 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:33:58 crc kubenswrapper[4796]: I1202 20:33:58.556886 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:33:58 crc kubenswrapper[4796]: I1202 20:33:58.631608 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:34:03 crc kubenswrapper[4796]: I1202 20:34:03.534235 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:34:03 crc kubenswrapper[4796]: I1202 20:34:03.556127 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:34:03 crc kubenswrapper[4796]: I1202 20:34:03.572362 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:34:03 crc kubenswrapper[4796]: I1202 20:34:03.573016 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:34:03 crc kubenswrapper[4796]: I1202 20:34:03.631386 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:34:03 crc kubenswrapper[4796]: I1202 20:34:03.659223 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:34:03 crc kubenswrapper[4796]: I1202 20:34:03.901151 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:34:03 crc kubenswrapper[4796]: I1202 20:34:03.908377 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:34:03 crc kubenswrapper[4796]: I1202 20:34:03.956644 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:34:03 crc kubenswrapper[4796]: I1202 20:34:03.959306 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:34:06 crc kubenswrapper[4796]: I1202 20:34:06.084586 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:34:06 crc kubenswrapper[4796]: I1202 20:34:06.085349 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="48c6ccd1-ed03-4648-9633-e7fc3f806d55" containerName="ceilometer-central-agent" containerID="cri-o://0e45ecdae9eb0e7283fae6c137dfb6a7d27746a4fa6fe833f0fe33585f647fbd" gracePeriod=30 Dec 02 20:34:06 crc kubenswrapper[4796]: I1202 20:34:06.085428 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="48c6ccd1-ed03-4648-9633-e7fc3f806d55" containerName="proxy-httpd" containerID="cri-o://14595e5aec24ed1d87d55f37b9cd2258d62e23202c6852f87b1dbfd03d7ac5bd" gracePeriod=30 Dec 02 20:34:06 crc kubenswrapper[4796]: I1202 20:34:06.085493 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="48c6ccd1-ed03-4648-9633-e7fc3f806d55" containerName="sg-core" containerID="cri-o://0ea0be5300a199165f7d151cf1b9cdc7423a14a6de16d2f81877609da9b00547" gracePeriod=30 Dec 02 20:34:06 crc kubenswrapper[4796]: I1202 20:34:06.085534 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="48c6ccd1-ed03-4648-9633-e7fc3f806d55" containerName="ceilometer-notification-agent" containerID="cri-o://7c20240715c75948ce34069d0dd0d6d672236917aa10e4e75672e4f252a3f0d1" gracePeriod=30 Dec 02 20:34:06 crc kubenswrapper[4796]: I1202 20:34:06.926834 4796 generic.go:334] "Generic (PLEG): container finished" podID="48c6ccd1-ed03-4648-9633-e7fc3f806d55" containerID="14595e5aec24ed1d87d55f37b9cd2258d62e23202c6852f87b1dbfd03d7ac5bd" exitCode=0 Dec 02 20:34:06 crc kubenswrapper[4796]: I1202 20:34:06.927757 4796 generic.go:334] "Generic (PLEG): container finished" podID="48c6ccd1-ed03-4648-9633-e7fc3f806d55" containerID="0ea0be5300a199165f7d151cf1b9cdc7423a14a6de16d2f81877609da9b00547" exitCode=2 Dec 02 20:34:06 crc kubenswrapper[4796]: I1202 
20:34:06.927925 4796 generic.go:334] "Generic (PLEG): container finished" podID="48c6ccd1-ed03-4648-9633-e7fc3f806d55" containerID="0e45ecdae9eb0e7283fae6c137dfb6a7d27746a4fa6fe833f0fe33585f647fbd" exitCode=0 Dec 02 20:34:06 crc kubenswrapper[4796]: I1202 20:34:06.927054 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"48c6ccd1-ed03-4648-9633-e7fc3f806d55","Type":"ContainerDied","Data":"14595e5aec24ed1d87d55f37b9cd2258d62e23202c6852f87b1dbfd03d7ac5bd"} Dec 02 20:34:06 crc kubenswrapper[4796]: I1202 20:34:06.928400 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"48c6ccd1-ed03-4648-9633-e7fc3f806d55","Type":"ContainerDied","Data":"0ea0be5300a199165f7d151cf1b9cdc7423a14a6de16d2f81877609da9b00547"} Dec 02 20:34:06 crc kubenswrapper[4796]: I1202 20:34:06.928553 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"48c6ccd1-ed03-4648-9633-e7fc3f806d55","Type":"ContainerDied","Data":"0e45ecdae9eb0e7283fae6c137dfb6a7d27746a4fa6fe833f0fe33585f647fbd"} Dec 02 20:34:07 crc kubenswrapper[4796]: I1202 20:34:07.166867 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-wl7wx"] Dec 02 20:34:07 crc kubenswrapper[4796]: I1202 20:34:07.183069 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-wl7wx"] Dec 02 20:34:07 crc kubenswrapper[4796]: I1202 20:34:07.192586 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher09d1-account-delete-h9n44"] Dec 02 20:34:07 crc kubenswrapper[4796]: I1202 20:34:07.194197 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher09d1-account-delete-h9n44" Dec 02 20:34:07 crc kubenswrapper[4796]: I1202 20:34:07.201791 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher09d1-account-delete-h9n44"] Dec 02 20:34:07 crc kubenswrapper[4796]: I1202 20:34:07.258670 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90a3f0ca-1b67-45ad-b865-32854d933a2e-operator-scripts\") pod \"watcher09d1-account-delete-h9n44\" (UID: \"90a3f0ca-1b67-45ad-b865-32854d933a2e\") " pod="watcher-kuttl-default/watcher09d1-account-delete-h9n44" Dec 02 20:34:07 crc kubenswrapper[4796]: I1202 20:34:07.258758 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wnx4\" (UniqueName: \"kubernetes.io/projected/90a3f0ca-1b67-45ad-b865-32854d933a2e-kube-api-access-5wnx4\") pod \"watcher09d1-account-delete-h9n44\" (UID: \"90a3f0ca-1b67-45ad-b865-32854d933a2e\") " pod="watcher-kuttl-default/watcher09d1-account-delete-h9n44" Dec 02 20:34:07 crc kubenswrapper[4796]: I1202 20:34:07.280158 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26036a99-f656-4eb9-8872-be62c7aa833b" path="/var/lib/kubelet/pods/26036a99-f656-4eb9-8872-be62c7aa833b/volumes" Dec 02 20:34:07 crc kubenswrapper[4796]: I1202 20:34:07.300808 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:34:07 crc kubenswrapper[4796]: I1202 20:34:07.301106 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" 
podUID="bb4249e1-cbfb-436a-b176-913d50a5f8e9" containerName="watcher-applier" containerID="cri-o://8df7c619b1b661cdf6b9168ea6d4ab6152ca182430d7e38a00b3b04f6fa8490b" gracePeriod=30 Dec 02 20:34:07 crc kubenswrapper[4796]: I1202 20:34:07.360273 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wnx4\" (UniqueName: \"kubernetes.io/projected/90a3f0ca-1b67-45ad-b865-32854d933a2e-kube-api-access-5wnx4\") pod \"watcher09d1-account-delete-h9n44\" (UID: \"90a3f0ca-1b67-45ad-b865-32854d933a2e\") " pod="watcher-kuttl-default/watcher09d1-account-delete-h9n44" Dec 02 20:34:07 crc kubenswrapper[4796]: I1202 20:34:07.360422 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90a3f0ca-1b67-45ad-b865-32854d933a2e-operator-scripts\") pod \"watcher09d1-account-delete-h9n44\" (UID: \"90a3f0ca-1b67-45ad-b865-32854d933a2e\") " pod="watcher-kuttl-default/watcher09d1-account-delete-h9n44" Dec 02 20:34:07 crc kubenswrapper[4796]: I1202 20:34:07.362123 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90a3f0ca-1b67-45ad-b865-32854d933a2e-operator-scripts\") pod \"watcher09d1-account-delete-h9n44\" (UID: \"90a3f0ca-1b67-45ad-b865-32854d933a2e\") " pod="watcher-kuttl-default/watcher09d1-account-delete-h9n44" Dec 02 20:34:07 crc kubenswrapper[4796]: I1202 20:34:07.393127 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:34:07 crc kubenswrapper[4796]: I1202 20:34:07.393445 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="2def312c-511b-4198-a20f-b27ea549d5db" containerName="watcher-kuttl-api-log" containerID="cri-o://70b4481d9515813ff02ab44f0b88c71ea77e2ee6937fbc1429ffbd7babbc4f4b" gracePeriod=30 Dec 02 20:34:07 crc kubenswrapper[4796]: I1202 20:34:07.393913 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="2def312c-511b-4198-a20f-b27ea549d5db" containerName="watcher-api" containerID="cri-o://b0c7fd5192b5983a7d4a623344542a60682ae5485c43b19fc6029c3433ba2c48" gracePeriod=30 Dec 02 20:34:07 crc kubenswrapper[4796]: I1202 20:34:07.404878 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wnx4\" (UniqueName: \"kubernetes.io/projected/90a3f0ca-1b67-45ad-b865-32854d933a2e-kube-api-access-5wnx4\") pod \"watcher09d1-account-delete-h9n44\" (UID: \"90a3f0ca-1b67-45ad-b865-32854d933a2e\") " pod="watcher-kuttl-default/watcher09d1-account-delete-h9n44" Dec 02 20:34:07 crc kubenswrapper[4796]: I1202 20:34:07.477946 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:34:07 crc kubenswrapper[4796]: I1202 20:34:07.478219 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="ae87438c-942d-4cdd-9a4d-999d85c68a0c" containerName="watcher-decision-engine" containerID="cri-o://19c5320dcb0fbb9c9fe028f070079bc855d68bc03882aa595b3f120b8ec164cc" gracePeriod=30 Dec 02 20:34:07 crc kubenswrapper[4796]: I1202 20:34:07.547832 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher09d1-account-delete-h9n44" Dec 02 20:34:08 crc kubenswrapper[4796]: W1202 20:34:08.071470 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90a3f0ca_1b67_45ad_b865_32854d933a2e.slice/crio-f7fba7edb7b0e637c2216ad98054ed45e842033f8aab3fd77b8900f72d9506da WatchSource:0}: Error finding container f7fba7edb7b0e637c2216ad98054ed45e842033f8aab3fd77b8900f72d9506da: Status 404 returned error can't find the container with id f7fba7edb7b0e637c2216ad98054ed45e842033f8aab3fd77b8900f72d9506da Dec 02 20:34:08 crc kubenswrapper[4796]: I1202 20:34:08.075500 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher09d1-account-delete-h9n44"] Dec 02 20:34:08 crc kubenswrapper[4796]: E1202 20:34:08.636302 4796 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8df7c619b1b661cdf6b9168ea6d4ab6152ca182430d7e38a00b3b04f6fa8490b" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 02 20:34:08 crc kubenswrapper[4796]: I1202 20:34:08.637648 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="2def312c-511b-4198-a20f-b27ea549d5db" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.133:9322/\": read tcp 10.217.0.2:44204->10.217.0.133:9322: read: connection reset by peer" Dec 02 20:34:08 crc kubenswrapper[4796]: I1202 20:34:08.637702 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="2def312c-511b-4198-a20f-b27ea549d5db" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.133:9322/\": read tcp 10.217.0.2:44206->10.217.0.133:9322: read: connection reset by peer" Dec 02 20:34:08 crc kubenswrapper[4796]: E1202 20:34:08.638110 4796 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8df7c619b1b661cdf6b9168ea6d4ab6152ca182430d7e38a00b3b04f6fa8490b" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 02 20:34:08 crc kubenswrapper[4796]: E1202 20:34:08.641912 4796 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8df7c619b1b661cdf6b9168ea6d4ab6152ca182430d7e38a00b3b04f6fa8490b" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 02 20:34:08 crc kubenswrapper[4796]: E1202 20:34:08.641964 4796 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="bb4249e1-cbfb-436a-b176-913d50a5f8e9" containerName="watcher-applier" Dec 02 20:34:08 crc kubenswrapper[4796]: I1202 20:34:08.962822 4796 generic.go:334] "Generic (PLEG): container finished" podID="2def312c-511b-4198-a20f-b27ea549d5db" containerID="b0c7fd5192b5983a7d4a623344542a60682ae5485c43b19fc6029c3433ba2c48" exitCode=0 Dec 02 20:34:08 crc kubenswrapper[4796]: I1202 20:34:08.963280 4796 generic.go:334] "Generic (PLEG): container finished" 
podID="2def312c-511b-4198-a20f-b27ea549d5db" containerID="70b4481d9515813ff02ab44f0b88c71ea77e2ee6937fbc1429ffbd7babbc4f4b" exitCode=143 Dec 02 20:34:08 crc kubenswrapper[4796]: I1202 20:34:08.962988 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"2def312c-511b-4198-a20f-b27ea549d5db","Type":"ContainerDied","Data":"b0c7fd5192b5983a7d4a623344542a60682ae5485c43b19fc6029c3433ba2c48"} Dec 02 20:34:08 crc kubenswrapper[4796]: I1202 20:34:08.963390 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"2def312c-511b-4198-a20f-b27ea549d5db","Type":"ContainerDied","Data":"70b4481d9515813ff02ab44f0b88c71ea77e2ee6937fbc1429ffbd7babbc4f4b"} Dec 02 20:34:08 crc kubenswrapper[4796]: I1202 20:34:08.966510 4796 generic.go:334] "Generic (PLEG): container finished" podID="90a3f0ca-1b67-45ad-b865-32854d933a2e" containerID="f9545693076834672c6f55810f5186b24f6eebbe1beef49b672deb1d8bf2932f" exitCode=0 Dec 02 20:34:08 crc kubenswrapper[4796]: I1202 20:34:08.966575 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher09d1-account-delete-h9n44" event={"ID":"90a3f0ca-1b67-45ad-b865-32854d933a2e","Type":"ContainerDied","Data":"f9545693076834672c6f55810f5186b24f6eebbe1beef49b672deb1d8bf2932f"} Dec 02 20:34:08 crc kubenswrapper[4796]: I1202 20:34:08.966609 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher09d1-account-delete-h9n44" event={"ID":"90a3f0ca-1b67-45ad-b865-32854d933a2e","Type":"ContainerStarted","Data":"f7fba7edb7b0e637c2216ad98054ed45e842033f8aab3fd77b8900f72d9506da"} Dec 02 20:34:09 crc kubenswrapper[4796]: I1202 20:34:09.129112 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:34:09 crc kubenswrapper[4796]: I1202 20:34:09.211240 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2def312c-511b-4198-a20f-b27ea549d5db-combined-ca-bundle\") pod \"2def312c-511b-4198-a20f-b27ea549d5db\" (UID: \"2def312c-511b-4198-a20f-b27ea549d5db\") " Dec 02 20:34:09 crc kubenswrapper[4796]: I1202 20:34:09.211395 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2def312c-511b-4198-a20f-b27ea549d5db-custom-prometheus-ca\") pod \"2def312c-511b-4198-a20f-b27ea549d5db\" (UID: \"2def312c-511b-4198-a20f-b27ea549d5db\") " Dec 02 20:34:09 crc kubenswrapper[4796]: I1202 20:34:09.211454 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99dd9\" (UniqueName: \"kubernetes.io/projected/2def312c-511b-4198-a20f-b27ea549d5db-kube-api-access-99dd9\") pod \"2def312c-511b-4198-a20f-b27ea549d5db\" (UID: \"2def312c-511b-4198-a20f-b27ea549d5db\") " Dec 02 20:34:09 crc kubenswrapper[4796]: I1202 20:34:09.211504 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2def312c-511b-4198-a20f-b27ea549d5db-logs\") pod \"2def312c-511b-4198-a20f-b27ea549d5db\" (UID: \"2def312c-511b-4198-a20f-b27ea549d5db\") " Dec 02 20:34:09 crc kubenswrapper[4796]: I1202 20:34:09.211536 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2def312c-511b-4198-a20f-b27ea549d5db-config-data\") pod \"2def312c-511b-4198-a20f-b27ea549d5db\" (UID: \"2def312c-511b-4198-a20f-b27ea549d5db\") " Dec 02 20:34:09 crc kubenswrapper[4796]: I1202 20:34:09.216658 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2def312c-511b-4198-a20f-b27ea549d5db-logs" (OuterVolumeSpecName: "logs") pod "2def312c-511b-4198-a20f-b27ea549d5db" (UID: "2def312c-511b-4198-a20f-b27ea549d5db"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:34:09 crc kubenswrapper[4796]: I1202 20:34:09.220716 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2def312c-511b-4198-a20f-b27ea549d5db-kube-api-access-99dd9" (OuterVolumeSpecName: "kube-api-access-99dd9") pod "2def312c-511b-4198-a20f-b27ea549d5db" (UID: "2def312c-511b-4198-a20f-b27ea549d5db"). InnerVolumeSpecName "kube-api-access-99dd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:34:09 crc kubenswrapper[4796]: I1202 20:34:09.240025 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2def312c-511b-4198-a20f-b27ea549d5db-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "2def312c-511b-4198-a20f-b27ea549d5db" (UID: "2def312c-511b-4198-a20f-b27ea549d5db"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:34:09 crc kubenswrapper[4796]: I1202 20:34:09.249236 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2def312c-511b-4198-a20f-b27ea549d5db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2def312c-511b-4198-a20f-b27ea549d5db" (UID: "2def312c-511b-4198-a20f-b27ea549d5db"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:34:09 crc kubenswrapper[4796]: I1202 20:34:09.261162 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2def312c-511b-4198-a20f-b27ea549d5db-config-data" (OuterVolumeSpecName: "config-data") pod "2def312c-511b-4198-a20f-b27ea549d5db" (UID: "2def312c-511b-4198-a20f-b27ea549d5db"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:34:09 crc kubenswrapper[4796]: I1202 20:34:09.313934 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2def312c-511b-4198-a20f-b27ea549d5db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:09 crc kubenswrapper[4796]: I1202 20:34:09.313987 4796 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2def312c-511b-4198-a20f-b27ea549d5db-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:09 crc kubenswrapper[4796]: I1202 20:34:09.313997 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99dd9\" (UniqueName: \"kubernetes.io/projected/2def312c-511b-4198-a20f-b27ea549d5db-kube-api-access-99dd9\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:09 crc kubenswrapper[4796]: I1202 20:34:09.314008 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2def312c-511b-4198-a20f-b27ea549d5db-logs\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:09 crc kubenswrapper[4796]: I1202 20:34:09.314056 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2def312c-511b-4198-a20f-b27ea549d5db-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:09 crc kubenswrapper[4796]: I1202 20:34:09.980167 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"2def312c-511b-4198-a20f-b27ea549d5db","Type":"ContainerDied","Data":"67d1cc356303c741efe2d900465ac7329675cbc72c543da1a9b89a88d1effa42"} Dec 02 20:34:09 crc kubenswrapper[4796]: I1202 20:34:09.980237 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:34:09 crc kubenswrapper[4796]: I1202 20:34:09.980525 4796 scope.go:117] "RemoveContainer" containerID="b0c7fd5192b5983a7d4a623344542a60682ae5485c43b19fc6029c3433ba2c48" Dec 02 20:34:10 crc kubenswrapper[4796]: I1202 20:34:10.008792 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:34:10 crc kubenswrapper[4796]: I1202 20:34:10.013309 4796 scope.go:117] "RemoveContainer" containerID="70b4481d9515813ff02ab44f0b88c71ea77e2ee6937fbc1429ffbd7babbc4f4b" Dec 02 20:34:10 crc kubenswrapper[4796]: I1202 20:34:10.017182 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:34:10 crc kubenswrapper[4796]: I1202 20:34:10.440663 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher09d1-account-delete-h9n44" Dec 02 20:34:10 crc kubenswrapper[4796]: I1202 20:34:10.538729 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90a3f0ca-1b67-45ad-b865-32854d933a2e-operator-scripts\") pod \"90a3f0ca-1b67-45ad-b865-32854d933a2e\" (UID: \"90a3f0ca-1b67-45ad-b865-32854d933a2e\") " Dec 02 20:34:10 crc kubenswrapper[4796]: I1202 20:34:10.538875 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wnx4\" (UniqueName: \"kubernetes.io/projected/90a3f0ca-1b67-45ad-b865-32854d933a2e-kube-api-access-5wnx4\") pod \"90a3f0ca-1b67-45ad-b865-32854d933a2e\" (UID: \"90a3f0ca-1b67-45ad-b865-32854d933a2e\") " Dec 02 20:34:10 crc kubenswrapper[4796]: I1202 20:34:10.539590 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90a3f0ca-1b67-45ad-b865-32854d933a2e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "90a3f0ca-1b67-45ad-b865-32854d933a2e" (UID: "90a3f0ca-1b67-45ad-b865-32854d933a2e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:34:10 crc kubenswrapper[4796]: I1202 20:34:10.544680 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90a3f0ca-1b67-45ad-b865-32854d933a2e-kube-api-access-5wnx4" (OuterVolumeSpecName: "kube-api-access-5wnx4") pod "90a3f0ca-1b67-45ad-b865-32854d933a2e" (UID: "90a3f0ca-1b67-45ad-b865-32854d933a2e"). InnerVolumeSpecName "kube-api-access-5wnx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:34:10 crc kubenswrapper[4796]: I1202 20:34:10.640956 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90a3f0ca-1b67-45ad-b865-32854d933a2e-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:10 crc kubenswrapper[4796]: I1202 20:34:10.641029 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wnx4\" (UniqueName: \"kubernetes.io/projected/90a3f0ca-1b67-45ad-b865-32854d933a2e-kube-api-access-5wnx4\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:10 crc kubenswrapper[4796]: I1202 20:34:10.801751 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:10 crc kubenswrapper[4796]: I1202 20:34:10.946650 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvqrx\" (UniqueName: \"kubernetes.io/projected/48c6ccd1-ed03-4648-9633-e7fc3f806d55-kube-api-access-tvqrx\") pod \"48c6ccd1-ed03-4648-9633-e7fc3f806d55\" (UID: \"48c6ccd1-ed03-4648-9633-e7fc3f806d55\") " Dec 02 20:34:10 crc kubenswrapper[4796]: I1202 20:34:10.946704 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48c6ccd1-ed03-4648-9633-e7fc3f806d55-run-httpd\") pod \"48c6ccd1-ed03-4648-9633-e7fc3f806d55\" (UID: \"48c6ccd1-ed03-4648-9633-e7fc3f806d55\") " Dec 02 20:34:10 crc kubenswrapper[4796]: I1202 20:34:10.946816 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48c6ccd1-ed03-4648-9633-e7fc3f806d55-log-httpd\") pod \"48c6ccd1-ed03-4648-9633-e7fc3f806d55\" (UID: \"48c6ccd1-ed03-4648-9633-e7fc3f806d55\") " Dec 02 20:34:10 crc kubenswrapper[4796]: I1202 20:34:10.946882 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c6ccd1-ed03-4648-9633-e7fc3f806d55-combined-ca-bundle\") pod \"48c6ccd1-ed03-4648-9633-e7fc3f806d55\" (UID: \"48c6ccd1-ed03-4648-9633-e7fc3f806d55\") " Dec 02 20:34:10 crc kubenswrapper[4796]: I1202 20:34:10.946917 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/48c6ccd1-ed03-4648-9633-e7fc3f806d55-sg-core-conf-yaml\") pod \"48c6ccd1-ed03-4648-9633-e7fc3f806d55\" (UID: \"48c6ccd1-ed03-4648-9633-e7fc3f806d55\") " Dec 02 20:34:10 crc kubenswrapper[4796]: I1202 20:34:10.946956 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48c6ccd1-ed03-4648-9633-e7fc3f806d55-scripts\") pod \"48c6ccd1-ed03-4648-9633-e7fc3f806d55\" (UID: \"48c6ccd1-ed03-4648-9633-e7fc3f806d55\") " Dec 02 20:34:10 crc kubenswrapper[4796]: I1202 20:34:10.946974 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/48c6ccd1-ed03-4648-9633-e7fc3f806d55-ceilometer-tls-certs\") pod \"48c6ccd1-ed03-4648-9633-e7fc3f806d55\" (UID: \"48c6ccd1-ed03-4648-9633-e7fc3f806d55\") " Dec 02 20:34:10 crc kubenswrapper[4796]: I1202 20:34:10.947022 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48c6ccd1-ed03-4648-9633-e7fc3f806d55-config-data\") pod \"48c6ccd1-ed03-4648-9633-e7fc3f806d55\" (UID: \"48c6ccd1-ed03-4648-9633-e7fc3f806d55\") " Dec 02 20:34:10 crc kubenswrapper[4796]: I1202 20:34:10.947833 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48c6ccd1-ed03-4648-9633-e7fc3f806d55-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "48c6ccd1-ed03-4648-9633-e7fc3f806d55" (UID: "48c6ccd1-ed03-4648-9633-e7fc3f806d55"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:34:10 crc kubenswrapper[4796]: I1202 20:34:10.947850 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48c6ccd1-ed03-4648-9633-e7fc3f806d55-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "48c6ccd1-ed03-4648-9633-e7fc3f806d55" (UID: "48c6ccd1-ed03-4648-9633-e7fc3f806d55"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:34:10 crc kubenswrapper[4796]: I1202 20:34:10.955489 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48c6ccd1-ed03-4648-9633-e7fc3f806d55-kube-api-access-tvqrx" (OuterVolumeSpecName: "kube-api-access-tvqrx") pod "48c6ccd1-ed03-4648-9633-e7fc3f806d55" (UID: "48c6ccd1-ed03-4648-9633-e7fc3f806d55"). InnerVolumeSpecName "kube-api-access-tvqrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:34:10 crc kubenswrapper[4796]: I1202 20:34:10.958913 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48c6ccd1-ed03-4648-9633-e7fc3f806d55-scripts" (OuterVolumeSpecName: "scripts") pod "48c6ccd1-ed03-4648-9633-e7fc3f806d55" (UID: "48c6ccd1-ed03-4648-9633-e7fc3f806d55"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:34:10 crc kubenswrapper[4796]: I1202 20:34:10.999263 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48c6ccd1-ed03-4648-9633-e7fc3f806d55-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "48c6ccd1-ed03-4648-9633-e7fc3f806d55" (UID: "48c6ccd1-ed03-4648-9633-e7fc3f806d55"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.006619 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher09d1-account-delete-h9n44" event={"ID":"90a3f0ca-1b67-45ad-b865-32854d933a2e","Type":"ContainerDied","Data":"f7fba7edb7b0e637c2216ad98054ed45e842033f8aab3fd77b8900f72d9506da"} Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.006666 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7fba7edb7b0e637c2216ad98054ed45e842033f8aab3fd77b8900f72d9506da" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.006726 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher09d1-account-delete-h9n44" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.022134 4796 generic.go:334] "Generic (PLEG): container finished" podID="48c6ccd1-ed03-4648-9633-e7fc3f806d55" containerID="7c20240715c75948ce34069d0dd0d6d672236917aa10e4e75672e4f252a3f0d1" exitCode=0 Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.022185 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"48c6ccd1-ed03-4648-9633-e7fc3f806d55","Type":"ContainerDied","Data":"7c20240715c75948ce34069d0dd0d6d672236917aa10e4e75672e4f252a3f0d1"} Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.022216 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"48c6ccd1-ed03-4648-9633-e7fc3f806d55","Type":"ContainerDied","Data":"c26615b6106981aefdb225eea243bf72747d54937c65d6aa4375233f6d6aae34"} Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.022236 4796 scope.go:117] "RemoveContainer" containerID="14595e5aec24ed1d87d55f37b9cd2258d62e23202c6852f87b1dbfd03d7ac5bd" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.022372 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.030281 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48c6ccd1-ed03-4648-9633-e7fc3f806d55-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48c6ccd1-ed03-4648-9633-e7fc3f806d55" (UID: "48c6ccd1-ed03-4648-9633-e7fc3f806d55"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.039363 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48c6ccd1-ed03-4648-9633-e7fc3f806d55-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "48c6ccd1-ed03-4648-9633-e7fc3f806d55" (UID: "48c6ccd1-ed03-4648-9633-e7fc3f806d55"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.049189 4796 scope.go:117] "RemoveContainer" containerID="0ea0be5300a199165f7d151cf1b9cdc7423a14a6de16d2f81877609da9b00547" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.050724 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvqrx\" (UniqueName: \"kubernetes.io/projected/48c6ccd1-ed03-4648-9633-e7fc3f806d55-kube-api-access-tvqrx\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.050783 4796 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48c6ccd1-ed03-4648-9633-e7fc3f806d55-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.050803 4796 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48c6ccd1-ed03-4648-9633-e7fc3f806d55-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.050820 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c6ccd1-ed03-4648-9633-e7fc3f806d55-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.050838 4796 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/48c6ccd1-ed03-4648-9633-e7fc3f806d55-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.050854 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48c6ccd1-ed03-4648-9633-e7fc3f806d55-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.050870 4796 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/48c6ccd1-ed03-4648-9633-e7fc3f806d55-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.076810 4796 scope.go:117] "RemoveContainer" containerID="7c20240715c75948ce34069d0dd0d6d672236917aa10e4e75672e4f252a3f0d1" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.077744 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48c6ccd1-ed03-4648-9633-e7fc3f806d55-config-data" (OuterVolumeSpecName: "config-data") pod "48c6ccd1-ed03-4648-9633-e7fc3f806d55" (UID: "48c6ccd1-ed03-4648-9633-e7fc3f806d55"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.098228 4796 scope.go:117] "RemoveContainer" containerID="0e45ecdae9eb0e7283fae6c137dfb6a7d27746a4fa6fe833f0fe33585f647fbd" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.118835 4796 scope.go:117] "RemoveContainer" containerID="14595e5aec24ed1d87d55f37b9cd2258d62e23202c6852f87b1dbfd03d7ac5bd" Dec 02 20:34:11 crc kubenswrapper[4796]: E1202 20:34:11.119214 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14595e5aec24ed1d87d55f37b9cd2258d62e23202c6852f87b1dbfd03d7ac5bd\": container with ID starting with 14595e5aec24ed1d87d55f37b9cd2258d62e23202c6852f87b1dbfd03d7ac5bd not found: ID does not exist" containerID="14595e5aec24ed1d87d55f37b9cd2258d62e23202c6852f87b1dbfd03d7ac5bd" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.119277 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14595e5aec24ed1d87d55f37b9cd2258d62e23202c6852f87b1dbfd03d7ac5bd"} err="failed to get container status \"14595e5aec24ed1d87d55f37b9cd2258d62e23202c6852f87b1dbfd03d7ac5bd\": rpc error: code = NotFound desc = could not find container \"14595e5aec24ed1d87d55f37b9cd2258d62e23202c6852f87b1dbfd03d7ac5bd\": container with ID starting with 14595e5aec24ed1d87d55f37b9cd2258d62e23202c6852f87b1dbfd03d7ac5bd not found: ID does not exist" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.119302 4796 scope.go:117] "RemoveContainer" containerID="0ea0be5300a199165f7d151cf1b9cdc7423a14a6de16d2f81877609da9b00547" Dec 02 20:34:11 crc kubenswrapper[4796]: E1202 20:34:11.119523 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ea0be5300a199165f7d151cf1b9cdc7423a14a6de16d2f81877609da9b00547\": container with ID starting with 0ea0be5300a199165f7d151cf1b9cdc7423a14a6de16d2f81877609da9b00547 not found: ID does not exist" containerID="0ea0be5300a199165f7d151cf1b9cdc7423a14a6de16d2f81877609da9b00547" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.119542 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ea0be5300a199165f7d151cf1b9cdc7423a14a6de16d2f81877609da9b00547"} err="failed to get container status \"0ea0be5300a199165f7d151cf1b9cdc7423a14a6de16d2f81877609da9b00547\": rpc error: code = NotFound desc = could not find container \"0ea0be5300a199165f7d151cf1b9cdc7423a14a6de16d2f81877609da9b00547\": container with ID starting with 0ea0be5300a199165f7d151cf1b9cdc7423a14a6de16d2f81877609da9b00547 not found: ID does not exist" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.119556 4796 scope.go:117] "RemoveContainer" containerID="7c20240715c75948ce34069d0dd0d6d672236917aa10e4e75672e4f252a3f0d1" Dec 02 20:34:11 crc kubenswrapper[4796]: E1202 20:34:11.119738 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c20240715c75948ce34069d0dd0d6d672236917aa10e4e75672e4f252a3f0d1\": container with ID starting with 7c20240715c75948ce34069d0dd0d6d672236917aa10e4e75672e4f252a3f0d1 not found: ID does not exist" containerID="7c20240715c75948ce34069d0dd0d6d672236917aa10e4e75672e4f252a3f0d1" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.119761 4796 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7c20240715c75948ce34069d0dd0d6d672236917aa10e4e75672e4f252a3f0d1"} err="failed to get container status \"7c20240715c75948ce34069d0dd0d6d672236917aa10e4e75672e4f252a3f0d1\": rpc error: code = NotFound desc = could not find container \"7c20240715c75948ce34069d0dd0d6d672236917aa10e4e75672e4f252a3f0d1\": container with ID starting with 7c20240715c75948ce34069d0dd0d6d672236917aa10e4e75672e4f252a3f0d1 not found: ID does not exist" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.119774 4796 scope.go:117] "RemoveContainer" containerID="0e45ecdae9eb0e7283fae6c137dfb6a7d27746a4fa6fe833f0fe33585f647fbd" Dec 02 20:34:11 crc kubenswrapper[4796]: E1202 20:34:11.119953 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e45ecdae9eb0e7283fae6c137dfb6a7d27746a4fa6fe833f0fe33585f647fbd\": container with ID starting with 0e45ecdae9eb0e7283fae6c137dfb6a7d27746a4fa6fe833f0fe33585f647fbd not found: ID does not exist" containerID="0e45ecdae9eb0e7283fae6c137dfb6a7d27746a4fa6fe833f0fe33585f647fbd" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.119977 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e45ecdae9eb0e7283fae6c137dfb6a7d27746a4fa6fe833f0fe33585f647fbd"} err="failed to get container status \"0e45ecdae9eb0e7283fae6c137dfb6a7d27746a4fa6fe833f0fe33585f647fbd\": rpc error: code = NotFound desc = could not find container \"0e45ecdae9eb0e7283fae6c137dfb6a7d27746a4fa6fe833f0fe33585f647fbd\": container with ID starting with 0e45ecdae9eb0e7283fae6c137dfb6a7d27746a4fa6fe833f0fe33585f647fbd not found: ID does not exist" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.152699 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48c6ccd1-ed03-4648-9633-e7fc3f806d55-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.276709 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2def312c-511b-4198-a20f-b27ea549d5db" path="/var/lib/kubelet/pods/2def312c-511b-4198-a20f-b27ea549d5db/volumes" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.341861 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.353785 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.405534 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:34:11 crc kubenswrapper[4796]: E1202 20:34:11.406013 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90a3f0ca-1b67-45ad-b865-32854d933a2e" containerName="mariadb-account-delete" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.406050 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="90a3f0ca-1b67-45ad-b865-32854d933a2e" containerName="mariadb-account-delete" Dec 02 20:34:11 crc kubenswrapper[4796]: E1202 20:34:11.406082 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48c6ccd1-ed03-4648-9633-e7fc3f806d55" containerName="ceilometer-central-agent" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.406090 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="48c6ccd1-ed03-4648-9633-e7fc3f806d55" containerName="ceilometer-central-agent" Dec 02 20:34:11 crc kubenswrapper[4796]: 
E1202 20:34:11.406101 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2def312c-511b-4198-a20f-b27ea549d5db" containerName="watcher-api" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.406107 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="2def312c-511b-4198-a20f-b27ea549d5db" containerName="watcher-api" Dec 02 20:34:11 crc kubenswrapper[4796]: E1202 20:34:11.406115 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2def312c-511b-4198-a20f-b27ea549d5db" containerName="watcher-kuttl-api-log" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.406121 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="2def312c-511b-4198-a20f-b27ea549d5db" containerName="watcher-kuttl-api-log" Dec 02 20:34:11 crc kubenswrapper[4796]: E1202 20:34:11.406133 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48c6ccd1-ed03-4648-9633-e7fc3f806d55" containerName="ceilometer-notification-agent" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.406140 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="48c6ccd1-ed03-4648-9633-e7fc3f806d55" containerName="ceilometer-notification-agent" Dec 02 20:34:11 crc kubenswrapper[4796]: E1202 20:34:11.406152 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48c6ccd1-ed03-4648-9633-e7fc3f806d55" containerName="proxy-httpd" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.406158 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="48c6ccd1-ed03-4648-9633-e7fc3f806d55" containerName="proxy-httpd" Dec 02 20:34:11 crc kubenswrapper[4796]: E1202 20:34:11.406170 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48c6ccd1-ed03-4648-9633-e7fc3f806d55" containerName="sg-core" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.406177 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="48c6ccd1-ed03-4648-9633-e7fc3f806d55" containerName="sg-core" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.406354 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="2def312c-511b-4198-a20f-b27ea549d5db" containerName="watcher-kuttl-api-log" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.406396 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="48c6ccd1-ed03-4648-9633-e7fc3f806d55" containerName="ceilometer-notification-agent" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.406411 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="48c6ccd1-ed03-4648-9633-e7fc3f806d55" containerName="ceilometer-central-agent" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.406427 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="48c6ccd1-ed03-4648-9633-e7fc3f806d55" containerName="sg-core" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.406446 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="48c6ccd1-ed03-4648-9633-e7fc3f806d55" containerName="proxy-httpd" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.406461 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="2def312c-511b-4198-a20f-b27ea549d5db" containerName="watcher-api" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.406487 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="90a3f0ca-1b67-45ad-b865-32854d933a2e" containerName="mariadb-account-delete" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.420696 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] 
Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.420816 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.423103 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.423326 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.423510 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.568644 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-config-data\") pod \"ceilometer-0\" (UID: \"0773fae9-ae8b-4e87-bec4-a1d605ce9e99\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.568695 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrzzx\" (UniqueName: \"kubernetes.io/projected/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-kube-api-access-hrzzx\") pod \"ceilometer-0\" (UID: \"0773fae9-ae8b-4e87-bec4-a1d605ce9e99\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.568738 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-log-httpd\") pod \"ceilometer-0\" (UID: \"0773fae9-ae8b-4e87-bec4-a1d605ce9e99\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.568834 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0773fae9-ae8b-4e87-bec4-a1d605ce9e99\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.568864 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0773fae9-ae8b-4e87-bec4-a1d605ce9e99\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.568889 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-run-httpd\") pod \"ceilometer-0\" (UID: \"0773fae9-ae8b-4e87-bec4-a1d605ce9e99\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.568931 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-scripts\") pod \"ceilometer-0\" (UID: \"0773fae9-ae8b-4e87-bec4-a1d605ce9e99\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.568969 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0773fae9-ae8b-4e87-bec4-a1d605ce9e99\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.670197 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-scripts\") pod \"ceilometer-0\" (UID: \"0773fae9-ae8b-4e87-bec4-a1d605ce9e99\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.670330 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0773fae9-ae8b-4e87-bec4-a1d605ce9e99\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.670375 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-config-data\") pod \"ceilometer-0\" (UID: \"0773fae9-ae8b-4e87-bec4-a1d605ce9e99\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.670411 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrzzx\" (UniqueName: \"kubernetes.io/projected/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-kube-api-access-hrzzx\") pod \"ceilometer-0\" (UID: \"0773fae9-ae8b-4e87-bec4-a1d605ce9e99\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.670437 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-log-httpd\") pod \"ceilometer-0\" (UID: \"0773fae9-ae8b-4e87-bec4-a1d605ce9e99\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.670514 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0773fae9-ae8b-4e87-bec4-a1d605ce9e99\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.670555 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0773fae9-ae8b-4e87-bec4-a1d605ce9e99\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.670618 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-run-httpd\") pod \"ceilometer-0\" (UID: \"0773fae9-ae8b-4e87-bec4-a1d605ce9e99\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.671241 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-run-httpd\") pod \"ceilometer-0\" (UID: \"0773fae9-ae8b-4e87-bec4-a1d605ce9e99\") " 
pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.671318 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-log-httpd\") pod \"ceilometer-0\" (UID: \"0773fae9-ae8b-4e87-bec4-a1d605ce9e99\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.674922 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-scripts\") pod \"ceilometer-0\" (UID: \"0773fae9-ae8b-4e87-bec4-a1d605ce9e99\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.675329 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0773fae9-ae8b-4e87-bec4-a1d605ce9e99\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.675747 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0773fae9-ae8b-4e87-bec4-a1d605ce9e99\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.676345 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0773fae9-ae8b-4e87-bec4-a1d605ce9e99\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.677731 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-config-data\") pod \"ceilometer-0\" (UID: \"0773fae9-ae8b-4e87-bec4-a1d605ce9e99\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.693094 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrzzx\" (UniqueName: \"kubernetes.io/projected/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-kube-api-access-hrzzx\") pod \"ceilometer-0\" (UID: \"0773fae9-ae8b-4e87-bec4-a1d605ce9e99\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.738735 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.893148 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.975184 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb4249e1-cbfb-436a-b176-913d50a5f8e9-combined-ca-bundle\") pod \"bb4249e1-cbfb-436a-b176-913d50a5f8e9\" (UID: \"bb4249e1-cbfb-436a-b176-913d50a5f8e9\") " Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.975315 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-686h2\" (UniqueName: \"kubernetes.io/projected/bb4249e1-cbfb-436a-b176-913d50a5f8e9-kube-api-access-686h2\") pod \"bb4249e1-cbfb-436a-b176-913d50a5f8e9\" (UID: \"bb4249e1-cbfb-436a-b176-913d50a5f8e9\") " Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.975382 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb4249e1-cbfb-436a-b176-913d50a5f8e9-config-data\") pod \"bb4249e1-cbfb-436a-b176-913d50a5f8e9\" (UID: \"bb4249e1-cbfb-436a-b176-913d50a5f8e9\") " Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.975405 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb4249e1-cbfb-436a-b176-913d50a5f8e9-logs\") pod \"bb4249e1-cbfb-436a-b176-913d50a5f8e9\" (UID: \"bb4249e1-cbfb-436a-b176-913d50a5f8e9\") " Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.976343 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb4249e1-cbfb-436a-b176-913d50a5f8e9-logs" (OuterVolumeSpecName: "logs") pod "bb4249e1-cbfb-436a-b176-913d50a5f8e9" (UID: "bb4249e1-cbfb-436a-b176-913d50a5f8e9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:34:11 crc kubenswrapper[4796]: I1202 20:34:11.987522 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb4249e1-cbfb-436a-b176-913d50a5f8e9-kube-api-access-686h2" (OuterVolumeSpecName: "kube-api-access-686h2") pod "bb4249e1-cbfb-436a-b176-913d50a5f8e9" (UID: "bb4249e1-cbfb-436a-b176-913d50a5f8e9"). InnerVolumeSpecName "kube-api-access-686h2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:34:12 crc kubenswrapper[4796]: I1202 20:34:12.007775 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb4249e1-cbfb-436a-b176-913d50a5f8e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb4249e1-cbfb-436a-b176-913d50a5f8e9" (UID: "bb4249e1-cbfb-436a-b176-913d50a5f8e9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:34:12 crc kubenswrapper[4796]: I1202 20:34:12.022571 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb4249e1-cbfb-436a-b176-913d50a5f8e9-config-data" (OuterVolumeSpecName: "config-data") pod "bb4249e1-cbfb-436a-b176-913d50a5f8e9" (UID: "bb4249e1-cbfb-436a-b176-913d50a5f8e9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:34:12 crc kubenswrapper[4796]: I1202 20:34:12.033625 4796 generic.go:334] "Generic (PLEG): container finished" podID="bb4249e1-cbfb-436a-b176-913d50a5f8e9" containerID="8df7c619b1b661cdf6b9168ea6d4ab6152ca182430d7e38a00b3b04f6fa8490b" exitCode=0 Dec 02 20:34:12 crc kubenswrapper[4796]: I1202 20:34:12.033706 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"bb4249e1-cbfb-436a-b176-913d50a5f8e9","Type":"ContainerDied","Data":"8df7c619b1b661cdf6b9168ea6d4ab6152ca182430d7e38a00b3b04f6fa8490b"} Dec 02 20:34:12 crc kubenswrapper[4796]: I1202 20:34:12.033706 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:34:12 crc kubenswrapper[4796]: I1202 20:34:12.033754 4796 scope.go:117] "RemoveContainer" containerID="8df7c619b1b661cdf6b9168ea6d4ab6152ca182430d7e38a00b3b04f6fa8490b" Dec 02 20:34:12 crc kubenswrapper[4796]: I1202 20:34:12.033740 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"bb4249e1-cbfb-436a-b176-913d50a5f8e9","Type":"ContainerDied","Data":"7d90da9281ea1494389b37bd8a21287d8541b7ca64503cc591f8ebfd556165bf"} Dec 02 20:34:12 crc kubenswrapper[4796]: I1202 20:34:12.073637 4796 scope.go:117] "RemoveContainer" containerID="8df7c619b1b661cdf6b9168ea6d4ab6152ca182430d7e38a00b3b04f6fa8490b" Dec 02 20:34:12 crc kubenswrapper[4796]: E1202 20:34:12.074736 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8df7c619b1b661cdf6b9168ea6d4ab6152ca182430d7e38a00b3b04f6fa8490b\": container with ID starting with 8df7c619b1b661cdf6b9168ea6d4ab6152ca182430d7e38a00b3b04f6fa8490b not found: ID does not exist" containerID="8df7c619b1b661cdf6b9168ea6d4ab6152ca182430d7e38a00b3b04f6fa8490b" Dec 02 20:34:12 crc kubenswrapper[4796]: I1202 20:34:12.074791 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8df7c619b1b661cdf6b9168ea6d4ab6152ca182430d7e38a00b3b04f6fa8490b"} err="failed to get container status \"8df7c619b1b661cdf6b9168ea6d4ab6152ca182430d7e38a00b3b04f6fa8490b\": rpc error: code = NotFound desc = could not find container \"8df7c619b1b661cdf6b9168ea6d4ab6152ca182430d7e38a00b3b04f6fa8490b\": container with ID starting with 8df7c619b1b661cdf6b9168ea6d4ab6152ca182430d7e38a00b3b04f6fa8490b not found: ID does not exist" Dec 02 20:34:12 crc kubenswrapper[4796]: I1202 20:34:12.078164 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb4249e1-cbfb-436a-b176-913d50a5f8e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:12 crc kubenswrapper[4796]: I1202 20:34:12.078199 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-686h2\" (UniqueName: \"kubernetes.io/projected/bb4249e1-cbfb-436a-b176-913d50a5f8e9-kube-api-access-686h2\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:12 crc kubenswrapper[4796]: I1202 20:34:12.078211 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb4249e1-cbfb-436a-b176-913d50a5f8e9-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:12 crc kubenswrapper[4796]: I1202 20:34:12.078220 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/bb4249e1-cbfb-436a-b176-913d50a5f8e9-logs\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:12 crc kubenswrapper[4796]: I1202 20:34:12.084909 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:34:12 crc kubenswrapper[4796]: I1202 20:34:12.096329 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:34:12 crc kubenswrapper[4796]: I1202 20:34:12.238760 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-fztln"] Dec 02 20:34:12 crc kubenswrapper[4796]: I1202 20:34:12.246958 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-fztln"] Dec 02 20:34:12 crc kubenswrapper[4796]: I1202 20:34:12.261245 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-09d1-account-create-update-6d25l"] Dec 02 20:34:12 crc kubenswrapper[4796]: I1202 20:34:12.270184 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher09d1-account-delete-h9n44"] Dec 02 20:34:12 crc kubenswrapper[4796]: I1202 20:34:12.287560 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-09d1-account-create-update-6d25l"] Dec 02 20:34:12 crc kubenswrapper[4796]: I1202 20:34:12.295359 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher09d1-account-delete-h9n44"] Dec 02 20:34:12 crc kubenswrapper[4796]: I1202 20:34:12.301029 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:34:13 crc kubenswrapper[4796]: I1202 20:34:13.047835 4796 generic.go:334] "Generic (PLEG): container finished" podID="ae87438c-942d-4cdd-9a4d-999d85c68a0c" containerID="19c5320dcb0fbb9c9fe028f070079bc855d68bc03882aa595b3f120b8ec164cc" exitCode=0 Dec 02 20:34:13 crc kubenswrapper[4796]: I1202 20:34:13.049216 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"ae87438c-942d-4cdd-9a4d-999d85c68a0c","Type":"ContainerDied","Data":"19c5320dcb0fbb9c9fe028f070079bc855d68bc03882aa595b3f120b8ec164cc"} Dec 02 20:34:13 crc kubenswrapper[4796]: I1202 20:34:13.051718 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0773fae9-ae8b-4e87-bec4-a1d605ce9e99","Type":"ContainerStarted","Data":"92b0ad087073f09a840e689df55bc2c4719d571e8d9f1423b1bb427f5a5ed07c"} Dec 02 20:34:13 crc kubenswrapper[4796]: I1202 20:34:13.286792 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="246b4725-b3de-4c2d-850f-8c02c1066622" path="/var/lib/kubelet/pods/246b4725-b3de-4c2d-850f-8c02c1066622/volumes" Dec 02 20:34:13 crc kubenswrapper[4796]: I1202 20:34:13.287387 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48c6ccd1-ed03-4648-9633-e7fc3f806d55" path="/var/lib/kubelet/pods/48c6ccd1-ed03-4648-9633-e7fc3f806d55/volumes" Dec 02 20:34:13 crc kubenswrapper[4796]: I1202 20:34:13.288027 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e39c55c-f8eb-4fbf-8c26-52189974b68c" path="/var/lib/kubelet/pods/4e39c55c-f8eb-4fbf-8c26-52189974b68c/volumes" Dec 02 20:34:13 crc kubenswrapper[4796]: I1202 20:34:13.292654 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90a3f0ca-1b67-45ad-b865-32854d933a2e" 
path="/var/lib/kubelet/pods/90a3f0ca-1b67-45ad-b865-32854d933a2e/volumes" Dec 02 20:34:13 crc kubenswrapper[4796]: I1202 20:34:13.293148 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb4249e1-cbfb-436a-b176-913d50a5f8e9" path="/var/lib/kubelet/pods/bb4249e1-cbfb-436a-b176-913d50a5f8e9/volumes" Dec 02 20:34:13 crc kubenswrapper[4796]: I1202 20:34:13.590106 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:34:13 crc kubenswrapper[4796]: I1202 20:34:13.704561 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae87438c-942d-4cdd-9a4d-999d85c68a0c-logs\") pod \"ae87438c-942d-4cdd-9a4d-999d85c68a0c\" (UID: \"ae87438c-942d-4cdd-9a4d-999d85c68a0c\") " Dec 02 20:34:13 crc kubenswrapper[4796]: I1202 20:34:13.704701 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d82j\" (UniqueName: \"kubernetes.io/projected/ae87438c-942d-4cdd-9a4d-999d85c68a0c-kube-api-access-4d82j\") pod \"ae87438c-942d-4cdd-9a4d-999d85c68a0c\" (UID: \"ae87438c-942d-4cdd-9a4d-999d85c68a0c\") " Dec 02 20:34:13 crc kubenswrapper[4796]: I1202 20:34:13.704721 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae87438c-942d-4cdd-9a4d-999d85c68a0c-config-data\") pod \"ae87438c-942d-4cdd-9a4d-999d85c68a0c\" (UID: \"ae87438c-942d-4cdd-9a4d-999d85c68a0c\") " Dec 02 20:34:13 crc kubenswrapper[4796]: I1202 20:34:13.704809 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ae87438c-942d-4cdd-9a4d-999d85c68a0c-custom-prometheus-ca\") pod \"ae87438c-942d-4cdd-9a4d-999d85c68a0c\" (UID: \"ae87438c-942d-4cdd-9a4d-999d85c68a0c\") " Dec 02 20:34:13 crc kubenswrapper[4796]: I1202 20:34:13.704825 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae87438c-942d-4cdd-9a4d-999d85c68a0c-combined-ca-bundle\") pod \"ae87438c-942d-4cdd-9a4d-999d85c68a0c\" (UID: \"ae87438c-942d-4cdd-9a4d-999d85c68a0c\") " Dec 02 20:34:13 crc kubenswrapper[4796]: I1202 20:34:13.705831 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae87438c-942d-4cdd-9a4d-999d85c68a0c-logs" (OuterVolumeSpecName: "logs") pod "ae87438c-942d-4cdd-9a4d-999d85c68a0c" (UID: "ae87438c-942d-4cdd-9a4d-999d85c68a0c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:34:13 crc kubenswrapper[4796]: I1202 20:34:13.710456 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae87438c-942d-4cdd-9a4d-999d85c68a0c-kube-api-access-4d82j" (OuterVolumeSpecName: "kube-api-access-4d82j") pod "ae87438c-942d-4cdd-9a4d-999d85c68a0c" (UID: "ae87438c-942d-4cdd-9a4d-999d85c68a0c"). InnerVolumeSpecName "kube-api-access-4d82j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:34:13 crc kubenswrapper[4796]: I1202 20:34:13.726624 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae87438c-942d-4cdd-9a4d-999d85c68a0c-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "ae87438c-942d-4cdd-9a4d-999d85c68a0c" (UID: "ae87438c-942d-4cdd-9a4d-999d85c68a0c"). 
InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:34:13 crc kubenswrapper[4796]: I1202 20:34:13.732376 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae87438c-942d-4cdd-9a4d-999d85c68a0c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae87438c-942d-4cdd-9a4d-999d85c68a0c" (UID: "ae87438c-942d-4cdd-9a4d-999d85c68a0c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:34:13 crc kubenswrapper[4796]: I1202 20:34:13.774913 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae87438c-942d-4cdd-9a4d-999d85c68a0c-config-data" (OuterVolumeSpecName: "config-data") pod "ae87438c-942d-4cdd-9a4d-999d85c68a0c" (UID: "ae87438c-942d-4cdd-9a4d-999d85c68a0c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:34:13 crc kubenswrapper[4796]: I1202 20:34:13.807366 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae87438c-942d-4cdd-9a4d-999d85c68a0c-logs\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:13 crc kubenswrapper[4796]: I1202 20:34:13.807405 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d82j\" (UniqueName: \"kubernetes.io/projected/ae87438c-942d-4cdd-9a4d-999d85c68a0c-kube-api-access-4d82j\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:13 crc kubenswrapper[4796]: I1202 20:34:13.807418 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae87438c-942d-4cdd-9a4d-999d85c68a0c-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:13 crc kubenswrapper[4796]: I1202 20:34:13.807428 4796 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ae87438c-942d-4cdd-9a4d-999d85c68a0c-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:13 crc kubenswrapper[4796]: I1202 20:34:13.807439 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae87438c-942d-4cdd-9a4d-999d85c68a0c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:14 crc kubenswrapper[4796]: I1202 20:34:14.063558 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"ae87438c-942d-4cdd-9a4d-999d85c68a0c","Type":"ContainerDied","Data":"fcc61ed5f456b711b2f9a7c5cd211b7f43c80501792c508f7646b6bafbdc2bb9"} Dec 02 20:34:14 crc kubenswrapper[4796]: I1202 20:34:14.063575 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:34:14 crc kubenswrapper[4796]: I1202 20:34:14.064042 4796 scope.go:117] "RemoveContainer" containerID="19c5320dcb0fbb9c9fe028f070079bc855d68bc03882aa595b3f120b8ec164cc" Dec 02 20:34:14 crc kubenswrapper[4796]: I1202 20:34:14.066588 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0773fae9-ae8b-4e87-bec4-a1d605ce9e99","Type":"ContainerStarted","Data":"11d879d8225c5cfe48244f6aade812aed60b8be7db2138b3d2f9e8b0a93067c4"} Dec 02 20:34:14 crc kubenswrapper[4796]: I1202 20:34:14.066621 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0773fae9-ae8b-4e87-bec4-a1d605ce9e99","Type":"ContainerStarted","Data":"c9eae20ef7067069f73faf7c774005a793fcd679a01b0ce9db1fb7f98bca043f"} Dec 02 20:34:14 crc kubenswrapper[4796]: I1202 20:34:14.110024 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:34:14 crc kubenswrapper[4796]: I1202 20:34:14.116542 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:34:15 crc kubenswrapper[4796]: I1202 20:34:15.081032 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0773fae9-ae8b-4e87-bec4-a1d605ce9e99","Type":"ContainerStarted","Data":"260aae30ba9f0782443069da17b1868cdd7b6343555943a32e53dc68a9861d9d"} Dec 02 20:34:15 crc kubenswrapper[4796]: I1202 20:34:15.279120 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae87438c-942d-4cdd-9a4d-999d85c68a0c" path="/var/lib/kubelet/pods/ae87438c-942d-4cdd-9a4d-999d85c68a0c/volumes" Dec 02 20:34:15 crc kubenswrapper[4796]: I1202 20:34:15.425013 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-fzp22"] Dec 02 20:34:15 crc kubenswrapper[4796]: E1202 20:34:15.425497 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb4249e1-cbfb-436a-b176-913d50a5f8e9" containerName="watcher-applier" Dec 02 20:34:15 crc kubenswrapper[4796]: I1202 20:34:15.425514 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb4249e1-cbfb-436a-b176-913d50a5f8e9" containerName="watcher-applier" Dec 02 20:34:15 crc kubenswrapper[4796]: E1202 20:34:15.425530 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae87438c-942d-4cdd-9a4d-999d85c68a0c" containerName="watcher-decision-engine" Dec 02 20:34:15 crc kubenswrapper[4796]: I1202 20:34:15.425537 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae87438c-942d-4cdd-9a4d-999d85c68a0c" containerName="watcher-decision-engine" Dec 02 20:34:15 crc kubenswrapper[4796]: I1202 20:34:15.425735 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae87438c-942d-4cdd-9a4d-999d85c68a0c" containerName="watcher-decision-engine" Dec 02 20:34:15 crc kubenswrapper[4796]: I1202 20:34:15.425761 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb4249e1-cbfb-436a-b176-913d50a5f8e9" containerName="watcher-applier" Dec 02 20:34:15 crc kubenswrapper[4796]: I1202 20:34:15.426388 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-fzp22" Dec 02 20:34:15 crc kubenswrapper[4796]: I1202 20:34:15.447105 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-fzp22"] Dec 02 20:34:15 crc kubenswrapper[4796]: I1202 20:34:15.454752 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-8aa5-account-create-update-h7mxs"] Dec 02 20:34:15 crc kubenswrapper[4796]: I1202 20:34:15.456658 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-8aa5-account-create-update-h7mxs" Dec 02 20:34:15 crc kubenswrapper[4796]: I1202 20:34:15.458636 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Dec 02 20:34:15 crc kubenswrapper[4796]: I1202 20:34:15.470960 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-8aa5-account-create-update-h7mxs"] Dec 02 20:34:15 crc kubenswrapper[4796]: I1202 20:34:15.538554 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/095471de-2b47-461b-a709-3254f5056853-operator-scripts\") pod \"watcher-db-create-fzp22\" (UID: \"095471de-2b47-461b-a709-3254f5056853\") " pod="watcher-kuttl-default/watcher-db-create-fzp22" Dec 02 20:34:15 crc kubenswrapper[4796]: I1202 20:34:15.538660 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b15a422-e077-4fab-b811-7f1804f6df2b-operator-scripts\") pod \"watcher-8aa5-account-create-update-h7mxs\" (UID: \"6b15a422-e077-4fab-b811-7f1804f6df2b\") " pod="watcher-kuttl-default/watcher-8aa5-account-create-update-h7mxs" Dec 02 20:34:15 crc kubenswrapper[4796]: I1202 20:34:15.538702 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48fks\" (UniqueName: \"kubernetes.io/projected/6b15a422-e077-4fab-b811-7f1804f6df2b-kube-api-access-48fks\") pod \"watcher-8aa5-account-create-update-h7mxs\" (UID: \"6b15a422-e077-4fab-b811-7f1804f6df2b\") " pod="watcher-kuttl-default/watcher-8aa5-account-create-update-h7mxs" Dec 02 20:34:15 crc kubenswrapper[4796]: I1202 20:34:15.538771 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drs46\" (UniqueName: \"kubernetes.io/projected/095471de-2b47-461b-a709-3254f5056853-kube-api-access-drs46\") pod \"watcher-db-create-fzp22\" (UID: \"095471de-2b47-461b-a709-3254f5056853\") " pod="watcher-kuttl-default/watcher-db-create-fzp22" Dec 02 20:34:15 crc kubenswrapper[4796]: I1202 20:34:15.641660 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b15a422-e077-4fab-b811-7f1804f6df2b-operator-scripts\") pod \"watcher-8aa5-account-create-update-h7mxs\" (UID: \"6b15a422-e077-4fab-b811-7f1804f6df2b\") " pod="watcher-kuttl-default/watcher-8aa5-account-create-update-h7mxs" Dec 02 20:34:15 crc kubenswrapper[4796]: I1202 20:34:15.641742 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48fks\" (UniqueName: \"kubernetes.io/projected/6b15a422-e077-4fab-b811-7f1804f6df2b-kube-api-access-48fks\") pod \"watcher-8aa5-account-create-update-h7mxs\" (UID: \"6b15a422-e077-4fab-b811-7f1804f6df2b\") " 
pod="watcher-kuttl-default/watcher-8aa5-account-create-update-h7mxs" Dec 02 20:34:15 crc kubenswrapper[4796]: I1202 20:34:15.641875 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drs46\" (UniqueName: \"kubernetes.io/projected/095471de-2b47-461b-a709-3254f5056853-kube-api-access-drs46\") pod \"watcher-db-create-fzp22\" (UID: \"095471de-2b47-461b-a709-3254f5056853\") " pod="watcher-kuttl-default/watcher-db-create-fzp22" Dec 02 20:34:15 crc kubenswrapper[4796]: I1202 20:34:15.642010 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/095471de-2b47-461b-a709-3254f5056853-operator-scripts\") pod \"watcher-db-create-fzp22\" (UID: \"095471de-2b47-461b-a709-3254f5056853\") " pod="watcher-kuttl-default/watcher-db-create-fzp22" Dec 02 20:34:15 crc kubenswrapper[4796]: I1202 20:34:15.643318 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b15a422-e077-4fab-b811-7f1804f6df2b-operator-scripts\") pod \"watcher-8aa5-account-create-update-h7mxs\" (UID: \"6b15a422-e077-4fab-b811-7f1804f6df2b\") " pod="watcher-kuttl-default/watcher-8aa5-account-create-update-h7mxs" Dec 02 20:34:15 crc kubenswrapper[4796]: I1202 20:34:15.643535 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/095471de-2b47-461b-a709-3254f5056853-operator-scripts\") pod \"watcher-db-create-fzp22\" (UID: \"095471de-2b47-461b-a709-3254f5056853\") " pod="watcher-kuttl-default/watcher-db-create-fzp22" Dec 02 20:34:15 crc kubenswrapper[4796]: I1202 20:34:15.670475 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drs46\" (UniqueName: \"kubernetes.io/projected/095471de-2b47-461b-a709-3254f5056853-kube-api-access-drs46\") pod \"watcher-db-create-fzp22\" (UID: \"095471de-2b47-461b-a709-3254f5056853\") " pod="watcher-kuttl-default/watcher-db-create-fzp22" Dec 02 20:34:15 crc kubenswrapper[4796]: I1202 20:34:15.679834 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48fks\" (UniqueName: \"kubernetes.io/projected/6b15a422-e077-4fab-b811-7f1804f6df2b-kube-api-access-48fks\") pod \"watcher-8aa5-account-create-update-h7mxs\" (UID: \"6b15a422-e077-4fab-b811-7f1804f6df2b\") " pod="watcher-kuttl-default/watcher-8aa5-account-create-update-h7mxs" Dec 02 20:34:15 crc kubenswrapper[4796]: I1202 20:34:15.748395 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-fzp22" Dec 02 20:34:15 crc kubenswrapper[4796]: I1202 20:34:15.779611 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-8aa5-account-create-update-h7mxs" Dec 02 20:34:16 crc kubenswrapper[4796]: I1202 20:34:16.323509 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-fzp22"] Dec 02 20:34:16 crc kubenswrapper[4796]: I1202 20:34:16.330893 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-8aa5-account-create-update-h7mxs"] Dec 02 20:34:17 crc kubenswrapper[4796]: I1202 20:34:17.102764 4796 generic.go:334] "Generic (PLEG): container finished" podID="6b15a422-e077-4fab-b811-7f1804f6df2b" containerID="81ce03f29d50509e8e84d17d871ccc0e267c33f9774abf2b183fac33dbb8a41b" exitCode=0 Dec 02 20:34:17 crc kubenswrapper[4796]: I1202 20:34:17.102837 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-8aa5-account-create-update-h7mxs" event={"ID":"6b15a422-e077-4fab-b811-7f1804f6df2b","Type":"ContainerDied","Data":"81ce03f29d50509e8e84d17d871ccc0e267c33f9774abf2b183fac33dbb8a41b"} Dec 02 20:34:17 crc kubenswrapper[4796]: I1202 20:34:17.103396 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-8aa5-account-create-update-h7mxs" event={"ID":"6b15a422-e077-4fab-b811-7f1804f6df2b","Type":"ContainerStarted","Data":"9112961fc8c40a42570614721601236fa34dcc01f743601685152ccc91d18109"} Dec 02 20:34:17 crc kubenswrapper[4796]: I1202 20:34:17.106683 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0773fae9-ae8b-4e87-bec4-a1d605ce9e99","Type":"ContainerStarted","Data":"2dc2f7c7ac351497b23a0d13b5df4ccb403f39436dc8a9e18b819c6d1fe352b6"} Dec 02 20:34:17 crc kubenswrapper[4796]: I1202 20:34:17.106915 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:17 crc kubenswrapper[4796]: I1202 20:34:17.109120 4796 generic.go:334] "Generic (PLEG): container finished" podID="095471de-2b47-461b-a709-3254f5056853" containerID="29db3627665f49b74aee13a1cf921981aefd405278d01f25c70845cd5b246bce" exitCode=0 Dec 02 20:34:17 crc kubenswrapper[4796]: I1202 20:34:17.109184 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-fzp22" event={"ID":"095471de-2b47-461b-a709-3254f5056853","Type":"ContainerDied","Data":"29db3627665f49b74aee13a1cf921981aefd405278d01f25c70845cd5b246bce"} Dec 02 20:34:17 crc kubenswrapper[4796]: I1202 20:34:17.109223 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-fzp22" event={"ID":"095471de-2b47-461b-a709-3254f5056853","Type":"ContainerStarted","Data":"bf9f607714faa9250c1ba1acef57d99a72f88f36f7e94ee8e7cecbf5e52353d9"} Dec 02 20:34:17 crc kubenswrapper[4796]: I1202 20:34:17.165695 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.109451872 podStartE2EDuration="6.165673069s" podCreationTimestamp="2025-12-02 20:34:11 +0000 UTC" firstStartedPulling="2025-12-02 20:34:12.283534286 +0000 UTC m=+1335.286909820" lastFinishedPulling="2025-12-02 20:34:16.339755483 +0000 UTC m=+1339.343131017" observedRunningTime="2025-12-02 20:34:17.148536451 +0000 UTC m=+1340.151912055" watchObservedRunningTime="2025-12-02 20:34:17.165673069 +0000 UTC m=+1340.169048603" Dec 02 20:34:18 crc kubenswrapper[4796]: I1202 20:34:18.564732 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-8aa5-account-create-update-h7mxs" Dec 02 20:34:18 crc kubenswrapper[4796]: I1202 20:34:18.613285 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b15a422-e077-4fab-b811-7f1804f6df2b-operator-scripts\") pod \"6b15a422-e077-4fab-b811-7f1804f6df2b\" (UID: \"6b15a422-e077-4fab-b811-7f1804f6df2b\") " Dec 02 20:34:18 crc kubenswrapper[4796]: I1202 20:34:18.613355 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48fks\" (UniqueName: \"kubernetes.io/projected/6b15a422-e077-4fab-b811-7f1804f6df2b-kube-api-access-48fks\") pod \"6b15a422-e077-4fab-b811-7f1804f6df2b\" (UID: \"6b15a422-e077-4fab-b811-7f1804f6df2b\") " Dec 02 20:34:18 crc kubenswrapper[4796]: I1202 20:34:18.613970 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b15a422-e077-4fab-b811-7f1804f6df2b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6b15a422-e077-4fab-b811-7f1804f6df2b" (UID: "6b15a422-e077-4fab-b811-7f1804f6df2b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:34:18 crc kubenswrapper[4796]: I1202 20:34:18.619977 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b15a422-e077-4fab-b811-7f1804f6df2b-kube-api-access-48fks" (OuterVolumeSpecName: "kube-api-access-48fks") pod "6b15a422-e077-4fab-b811-7f1804f6df2b" (UID: "6b15a422-e077-4fab-b811-7f1804f6df2b"). InnerVolumeSpecName "kube-api-access-48fks". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:34:18 crc kubenswrapper[4796]: I1202 20:34:18.676875 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-fzp22" Dec 02 20:34:18 crc kubenswrapper[4796]: I1202 20:34:18.715089 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drs46\" (UniqueName: \"kubernetes.io/projected/095471de-2b47-461b-a709-3254f5056853-kube-api-access-drs46\") pod \"095471de-2b47-461b-a709-3254f5056853\" (UID: \"095471de-2b47-461b-a709-3254f5056853\") " Dec 02 20:34:18 crc kubenswrapper[4796]: I1202 20:34:18.715162 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/095471de-2b47-461b-a709-3254f5056853-operator-scripts\") pod \"095471de-2b47-461b-a709-3254f5056853\" (UID: \"095471de-2b47-461b-a709-3254f5056853\") " Dec 02 20:34:18 crc kubenswrapper[4796]: I1202 20:34:18.715709 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/095471de-2b47-461b-a709-3254f5056853-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "095471de-2b47-461b-a709-3254f5056853" (UID: "095471de-2b47-461b-a709-3254f5056853"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:34:18 crc kubenswrapper[4796]: I1202 20:34:18.715932 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/095471de-2b47-461b-a709-3254f5056853-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:18 crc kubenswrapper[4796]: I1202 20:34:18.715951 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b15a422-e077-4fab-b811-7f1804f6df2b-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:18 crc kubenswrapper[4796]: I1202 20:34:18.715961 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48fks\" (UniqueName: \"kubernetes.io/projected/6b15a422-e077-4fab-b811-7f1804f6df2b-kube-api-access-48fks\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:18 crc kubenswrapper[4796]: I1202 20:34:18.718456 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/095471de-2b47-461b-a709-3254f5056853-kube-api-access-drs46" (OuterVolumeSpecName: "kube-api-access-drs46") pod "095471de-2b47-461b-a709-3254f5056853" (UID: "095471de-2b47-461b-a709-3254f5056853"). InnerVolumeSpecName "kube-api-access-drs46". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:34:18 crc kubenswrapper[4796]: I1202 20:34:18.817312 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drs46\" (UniqueName: \"kubernetes.io/projected/095471de-2b47-461b-a709-3254f5056853-kube-api-access-drs46\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:19 crc kubenswrapper[4796]: I1202 20:34:19.130159 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-fzp22" Dec 02 20:34:19 crc kubenswrapper[4796]: I1202 20:34:19.130204 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-fzp22" event={"ID":"095471de-2b47-461b-a709-3254f5056853","Type":"ContainerDied","Data":"bf9f607714faa9250c1ba1acef57d99a72f88f36f7e94ee8e7cecbf5e52353d9"} Dec 02 20:34:19 crc kubenswrapper[4796]: I1202 20:34:19.130407 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf9f607714faa9250c1ba1acef57d99a72f88f36f7e94ee8e7cecbf5e52353d9" Dec 02 20:34:19 crc kubenswrapper[4796]: I1202 20:34:19.133144 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-8aa5-account-create-update-h7mxs" event={"ID":"6b15a422-e077-4fab-b811-7f1804f6df2b","Type":"ContainerDied","Data":"9112961fc8c40a42570614721601236fa34dcc01f743601685152ccc91d18109"} Dec 02 20:34:19 crc kubenswrapper[4796]: I1202 20:34:19.133198 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9112961fc8c40a42570614721601236fa34dcc01f743601685152ccc91d18109" Dec 02 20:34:19 crc kubenswrapper[4796]: I1202 20:34:19.133315 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-8aa5-account-create-update-h7mxs" Dec 02 20:34:20 crc kubenswrapper[4796]: I1202 20:34:20.906262 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-76g7n"] Dec 02 20:34:20 crc kubenswrapper[4796]: E1202 20:34:20.906974 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="095471de-2b47-461b-a709-3254f5056853" containerName="mariadb-database-create" Dec 02 20:34:20 crc kubenswrapper[4796]: I1202 20:34:20.906988 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="095471de-2b47-461b-a709-3254f5056853" containerName="mariadb-database-create" Dec 02 20:34:20 crc kubenswrapper[4796]: E1202 20:34:20.907010 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b15a422-e077-4fab-b811-7f1804f6df2b" containerName="mariadb-account-create-update" Dec 02 20:34:20 crc kubenswrapper[4796]: I1202 20:34:20.907016 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b15a422-e077-4fab-b811-7f1804f6df2b" containerName="mariadb-account-create-update" Dec 02 20:34:20 crc kubenswrapper[4796]: I1202 20:34:20.907242 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="095471de-2b47-461b-a709-3254f5056853" containerName="mariadb-database-create" Dec 02 20:34:20 crc kubenswrapper[4796]: I1202 20:34:20.907278 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b15a422-e077-4fab-b811-7f1804f6df2b" containerName="mariadb-account-create-update" Dec 02 20:34:20 crc kubenswrapper[4796]: I1202 20:34:20.907893 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-76g7n" Dec 02 20:34:20 crc kubenswrapper[4796]: I1202 20:34:20.910538 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Dec 02 20:34:20 crc kubenswrapper[4796]: I1202 20:34:20.919553 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-dqvqp" Dec 02 20:34:20 crc kubenswrapper[4796]: I1202 20:34:20.920959 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-76g7n"] Dec 02 20:34:20 crc kubenswrapper[4796]: I1202 20:34:20.961397 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9adc563a-8133-4030-97e1-52e079b45951-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-76g7n\" (UID: \"9adc563a-8133-4030-97e1-52e079b45951\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-76g7n" Dec 02 20:34:20 crc kubenswrapper[4796]: I1202 20:34:20.961465 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9adc563a-8133-4030-97e1-52e079b45951-config-data\") pod \"watcher-kuttl-db-sync-76g7n\" (UID: \"9adc563a-8133-4030-97e1-52e079b45951\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-76g7n" Dec 02 20:34:20 crc kubenswrapper[4796]: I1202 20:34:20.961495 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9adc563a-8133-4030-97e1-52e079b45951-db-sync-config-data\") pod \"watcher-kuttl-db-sync-76g7n\" (UID: \"9adc563a-8133-4030-97e1-52e079b45951\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-76g7n" Dec 02 20:34:20 crc 
kubenswrapper[4796]: I1202 20:34:20.961611 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgzgr\" (UniqueName: \"kubernetes.io/projected/9adc563a-8133-4030-97e1-52e079b45951-kube-api-access-jgzgr\") pod \"watcher-kuttl-db-sync-76g7n\" (UID: \"9adc563a-8133-4030-97e1-52e079b45951\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-76g7n" Dec 02 20:34:21 crc kubenswrapper[4796]: I1202 20:34:21.063613 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9adc563a-8133-4030-97e1-52e079b45951-db-sync-config-data\") pod \"watcher-kuttl-db-sync-76g7n\" (UID: \"9adc563a-8133-4030-97e1-52e079b45951\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-76g7n" Dec 02 20:34:21 crc kubenswrapper[4796]: I1202 20:34:21.063719 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgzgr\" (UniqueName: \"kubernetes.io/projected/9adc563a-8133-4030-97e1-52e079b45951-kube-api-access-jgzgr\") pod \"watcher-kuttl-db-sync-76g7n\" (UID: \"9adc563a-8133-4030-97e1-52e079b45951\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-76g7n" Dec 02 20:34:21 crc kubenswrapper[4796]: I1202 20:34:21.063845 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9adc563a-8133-4030-97e1-52e079b45951-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-76g7n\" (UID: \"9adc563a-8133-4030-97e1-52e079b45951\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-76g7n" Dec 02 20:34:21 crc kubenswrapper[4796]: I1202 20:34:21.063880 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9adc563a-8133-4030-97e1-52e079b45951-config-data\") pod \"watcher-kuttl-db-sync-76g7n\" (UID: \"9adc563a-8133-4030-97e1-52e079b45951\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-76g7n" Dec 02 20:34:21 crc kubenswrapper[4796]: I1202 20:34:21.069536 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9adc563a-8133-4030-97e1-52e079b45951-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-76g7n\" (UID: \"9adc563a-8133-4030-97e1-52e079b45951\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-76g7n" Dec 02 20:34:21 crc kubenswrapper[4796]: I1202 20:34:21.069540 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9adc563a-8133-4030-97e1-52e079b45951-db-sync-config-data\") pod \"watcher-kuttl-db-sync-76g7n\" (UID: \"9adc563a-8133-4030-97e1-52e079b45951\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-76g7n" Dec 02 20:34:21 crc kubenswrapper[4796]: I1202 20:34:21.072967 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9adc563a-8133-4030-97e1-52e079b45951-config-data\") pod \"watcher-kuttl-db-sync-76g7n\" (UID: \"9adc563a-8133-4030-97e1-52e079b45951\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-76g7n" Dec 02 20:34:21 crc kubenswrapper[4796]: I1202 20:34:21.081971 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgzgr\" (UniqueName: \"kubernetes.io/projected/9adc563a-8133-4030-97e1-52e079b45951-kube-api-access-jgzgr\") pod \"watcher-kuttl-db-sync-76g7n\" (UID: \"9adc563a-8133-4030-97e1-52e079b45951\") " 
pod="watcher-kuttl-default/watcher-kuttl-db-sync-76g7n" Dec 02 20:34:21 crc kubenswrapper[4796]: I1202 20:34:21.227781 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-76g7n" Dec 02 20:34:21 crc kubenswrapper[4796]: I1202 20:34:21.703844 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-76g7n"] Dec 02 20:34:22 crc kubenswrapper[4796]: I1202 20:34:22.163071 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-76g7n" event={"ID":"9adc563a-8133-4030-97e1-52e079b45951","Type":"ContainerStarted","Data":"8488b8338e1ae62ca80549a2127cb67ac6dbf8112311d7051c78acf4f9a7dceb"} Dec 02 20:34:22 crc kubenswrapper[4796]: I1202 20:34:22.163128 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-76g7n" event={"ID":"9adc563a-8133-4030-97e1-52e079b45951","Type":"ContainerStarted","Data":"1d13ed6264a34809f2faddc35d7b6848b0dfa5542a264ce96672712b88df8574"} Dec 02 20:34:22 crc kubenswrapper[4796]: I1202 20:34:22.184680 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-76g7n" podStartSLOduration=2.184658092 podStartE2EDuration="2.184658092s" podCreationTimestamp="2025-12-02 20:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:34:22.180797578 +0000 UTC m=+1345.184173132" watchObservedRunningTime="2025-12-02 20:34:22.184658092 +0000 UTC m=+1345.188033616" Dec 02 20:34:25 crc kubenswrapper[4796]: I1202 20:34:25.189495 4796 patch_prober.go:28] interesting pod/machine-config-daemon-wzhpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 20:34:25 crc kubenswrapper[4796]: I1202 20:34:25.190279 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 20:34:25 crc kubenswrapper[4796]: I1202 20:34:25.195454 4796 generic.go:334] "Generic (PLEG): container finished" podID="9adc563a-8133-4030-97e1-52e079b45951" containerID="8488b8338e1ae62ca80549a2127cb67ac6dbf8112311d7051c78acf4f9a7dceb" exitCode=0 Dec 02 20:34:25 crc kubenswrapper[4796]: I1202 20:34:25.195535 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-76g7n" event={"ID":"9adc563a-8133-4030-97e1-52e079b45951","Type":"ContainerDied","Data":"8488b8338e1ae62ca80549a2127cb67ac6dbf8112311d7051c78acf4f9a7dceb"} Dec 02 20:34:26 crc kubenswrapper[4796]: I1202 20:34:26.640643 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-76g7n" Dec 02 20:34:26 crc kubenswrapper[4796]: I1202 20:34:26.764266 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9adc563a-8133-4030-97e1-52e079b45951-combined-ca-bundle\") pod \"9adc563a-8133-4030-97e1-52e079b45951\" (UID: \"9adc563a-8133-4030-97e1-52e079b45951\") " Dec 02 20:34:26 crc kubenswrapper[4796]: I1202 20:34:26.764412 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9adc563a-8133-4030-97e1-52e079b45951-db-sync-config-data\") pod \"9adc563a-8133-4030-97e1-52e079b45951\" (UID: \"9adc563a-8133-4030-97e1-52e079b45951\") " Dec 02 20:34:26 crc kubenswrapper[4796]: I1202 20:34:26.764500 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9adc563a-8133-4030-97e1-52e079b45951-config-data\") pod \"9adc563a-8133-4030-97e1-52e079b45951\" (UID: \"9adc563a-8133-4030-97e1-52e079b45951\") " Dec 02 20:34:26 crc kubenswrapper[4796]: I1202 20:34:26.764566 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgzgr\" (UniqueName: \"kubernetes.io/projected/9adc563a-8133-4030-97e1-52e079b45951-kube-api-access-jgzgr\") pod \"9adc563a-8133-4030-97e1-52e079b45951\" (UID: \"9adc563a-8133-4030-97e1-52e079b45951\") " Dec 02 20:34:26 crc kubenswrapper[4796]: I1202 20:34:26.770859 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9adc563a-8133-4030-97e1-52e079b45951-kube-api-access-jgzgr" (OuterVolumeSpecName: "kube-api-access-jgzgr") pod "9adc563a-8133-4030-97e1-52e079b45951" (UID: "9adc563a-8133-4030-97e1-52e079b45951"). InnerVolumeSpecName "kube-api-access-jgzgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:34:26 crc kubenswrapper[4796]: I1202 20:34:26.771081 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9adc563a-8133-4030-97e1-52e079b45951-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "9adc563a-8133-4030-97e1-52e079b45951" (UID: "9adc563a-8133-4030-97e1-52e079b45951"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:34:26 crc kubenswrapper[4796]: I1202 20:34:26.788586 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9adc563a-8133-4030-97e1-52e079b45951-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9adc563a-8133-4030-97e1-52e079b45951" (UID: "9adc563a-8133-4030-97e1-52e079b45951"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:34:26 crc kubenswrapper[4796]: I1202 20:34:26.820799 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9adc563a-8133-4030-97e1-52e079b45951-config-data" (OuterVolumeSpecName: "config-data") pod "9adc563a-8133-4030-97e1-52e079b45951" (UID: "9adc563a-8133-4030-97e1-52e079b45951"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:34:26 crc kubenswrapper[4796]: I1202 20:34:26.866424 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9adc563a-8133-4030-97e1-52e079b45951-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:26 crc kubenswrapper[4796]: I1202 20:34:26.866484 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgzgr\" (UniqueName: \"kubernetes.io/projected/9adc563a-8133-4030-97e1-52e079b45951-kube-api-access-jgzgr\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:26 crc kubenswrapper[4796]: I1202 20:34:26.866497 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9adc563a-8133-4030-97e1-52e079b45951-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:26 crc kubenswrapper[4796]: I1202 20:34:26.866506 4796 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9adc563a-8133-4030-97e1-52e079b45951-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.221527 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-76g7n" event={"ID":"9adc563a-8133-4030-97e1-52e079b45951","Type":"ContainerDied","Data":"1d13ed6264a34809f2faddc35d7b6848b0dfa5542a264ce96672712b88df8574"} Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.221908 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d13ed6264a34809f2faddc35d7b6848b0dfa5542a264ce96672712b88df8574" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.221674 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-76g7n" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.587984 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:34:27 crc kubenswrapper[4796]: E1202 20:34:27.588368 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9adc563a-8133-4030-97e1-52e079b45951" containerName="watcher-kuttl-db-sync" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.588383 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9adc563a-8133-4030-97e1-52e079b45951" containerName="watcher-kuttl-db-sync" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.588535 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="9adc563a-8133-4030-97e1-52e079b45951" containerName="watcher-kuttl-db-sync" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.589176 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.592277 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.592496 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-dqvqp" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.611075 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.628491 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.630141 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.634158 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.643524 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.644774 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.649771 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.658113 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.663845 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.780943 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b564e2ad-1452-419e-9bba-1570f3341bdf-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b564e2ad-1452-419e-9bba-1570f3341bdf\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.781017 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b564e2ad-1452-419e-9bba-1570f3341bdf-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b564e2ad-1452-419e-9bba-1570f3341bdf\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.781112 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b564e2ad-1452-419e-9bba-1570f3341bdf-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b564e2ad-1452-419e-9bba-1570f3341bdf\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.781152 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.781179 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nngmp\" (UniqueName: \"kubernetes.io/projected/8a17ed05-8780-4027-ab72-4008c36a4a47-kube-api-access-nngmp\") pod \"watcher-kuttl-api-0\" (UID: \"8a17ed05-8780-4027-ab72-4008c36a4a47\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.781246 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.781334 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b564e2ad-1452-419e-9bba-1570f3341bdf-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b564e2ad-1452-419e-9bba-1570f3341bdf\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.781376 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a17ed05-8780-4027-ab72-4008c36a4a47-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"8a17ed05-8780-4027-ab72-4008c36a4a47\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.781408 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8a17ed05-8780-4027-ab72-4008c36a4a47-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"8a17ed05-8780-4027-ab72-4008c36a4a47\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.781506 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j2hh\" (UniqueName: \"kubernetes.io/projected/b564e2ad-1452-419e-9bba-1570f3341bdf-kube-api-access-8j2hh\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b564e2ad-1452-419e-9bba-1570f3341bdf\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.781534 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a17ed05-8780-4027-ab72-4008c36a4a47-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"8a17ed05-8780-4027-ab72-4008c36a4a47\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.781569 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a17ed05-8780-4027-ab72-4008c36a4a47-logs\") pod \"watcher-kuttl-api-0\" (UID: \"8a17ed05-8780-4027-ab72-4008c36a4a47\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 
20:34:27.781597 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgzzj\" (UniqueName: \"kubernetes.io/projected/1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca-kube-api-access-jgzzj\") pod \"watcher-kuttl-applier-0\" (UID: \"1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.781839 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.884373 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgzzj\" (UniqueName: \"kubernetes.io/projected/1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca-kube-api-access-jgzzj\") pod \"watcher-kuttl-applier-0\" (UID: \"1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.884497 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.884561 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b564e2ad-1452-419e-9bba-1570f3341bdf-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b564e2ad-1452-419e-9bba-1570f3341bdf\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.884616 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b564e2ad-1452-419e-9bba-1570f3341bdf-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b564e2ad-1452-419e-9bba-1570f3341bdf\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.884655 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b564e2ad-1452-419e-9bba-1570f3341bdf-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b564e2ad-1452-419e-9bba-1570f3341bdf\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.884711 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.884773 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nngmp\" (UniqueName: \"kubernetes.io/projected/8a17ed05-8780-4027-ab72-4008c36a4a47-kube-api-access-nngmp\") pod \"watcher-kuttl-api-0\" (UID: \"8a17ed05-8780-4027-ab72-4008c36a4a47\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:34:27 crc 
kubenswrapper[4796]: I1202 20:34:27.884836 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.884904 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b564e2ad-1452-419e-9bba-1570f3341bdf-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b564e2ad-1452-419e-9bba-1570f3341bdf\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.884955 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a17ed05-8780-4027-ab72-4008c36a4a47-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"8a17ed05-8780-4027-ab72-4008c36a4a47\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.885002 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8a17ed05-8780-4027-ab72-4008c36a4a47-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"8a17ed05-8780-4027-ab72-4008c36a4a47\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.885075 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j2hh\" (UniqueName: \"kubernetes.io/projected/b564e2ad-1452-419e-9bba-1570f3341bdf-kube-api-access-8j2hh\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b564e2ad-1452-419e-9bba-1570f3341bdf\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.885110 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a17ed05-8780-4027-ab72-4008c36a4a47-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"8a17ed05-8780-4027-ab72-4008c36a4a47\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.885165 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a17ed05-8780-4027-ab72-4008c36a4a47-logs\") pod \"watcher-kuttl-api-0\" (UID: \"8a17ed05-8780-4027-ab72-4008c36a4a47\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.885756 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b564e2ad-1452-419e-9bba-1570f3341bdf-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b564e2ad-1452-419e-9bba-1570f3341bdf\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.885896 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a17ed05-8780-4027-ab72-4008c36a4a47-logs\") pod \"watcher-kuttl-api-0\" (UID: \"8a17ed05-8780-4027-ab72-4008c36a4a47\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.886222 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.890615 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a17ed05-8780-4027-ab72-4008c36a4a47-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"8a17ed05-8780-4027-ab72-4008c36a4a47\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.893246 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.897376 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b564e2ad-1452-419e-9bba-1570f3341bdf-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b564e2ad-1452-419e-9bba-1570f3341bdf\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.902085 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b564e2ad-1452-419e-9bba-1570f3341bdf-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b564e2ad-1452-419e-9bba-1570f3341bdf\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.902701 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.902773 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b564e2ad-1452-419e-9bba-1570f3341bdf-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b564e2ad-1452-419e-9bba-1570f3341bdf\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.902907 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a17ed05-8780-4027-ab72-4008c36a4a47-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"8a17ed05-8780-4027-ab72-4008c36a4a47\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.903141 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8a17ed05-8780-4027-ab72-4008c36a4a47-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"8a17ed05-8780-4027-ab72-4008c36a4a47\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.905511 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgzzj\" (UniqueName: 
\"kubernetes.io/projected/1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca-kube-api-access-jgzzj\") pod \"watcher-kuttl-applier-0\" (UID: \"1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.917923 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nngmp\" (UniqueName: \"kubernetes.io/projected/8a17ed05-8780-4027-ab72-4008c36a4a47-kube-api-access-nngmp\") pod \"watcher-kuttl-api-0\" (UID: \"8a17ed05-8780-4027-ab72-4008c36a4a47\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.921087 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j2hh\" (UniqueName: \"kubernetes.io/projected/b564e2ad-1452-419e-9bba-1570f3341bdf-kube-api-access-8j2hh\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b564e2ad-1452-419e-9bba-1570f3341bdf\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.950931 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:34:27 crc kubenswrapper[4796]: I1202 20:34:27.965908 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:34:28 crc kubenswrapper[4796]: I1202 20:34:28.206388 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:34:28 crc kubenswrapper[4796]: I1202 20:34:28.483219 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:34:28 crc kubenswrapper[4796]: I1202 20:34:28.514874 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:34:28 crc kubenswrapper[4796]: I1202 20:34:28.896844 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:34:28 crc kubenswrapper[4796]: W1202 20:34:28.908231 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb564e2ad_1452_419e_9bba_1570f3341bdf.slice/crio-6e600c21e19a6750d960b3bf2aae63b0568db19f169e910b3f734aa25c31b674 WatchSource:0}: Error finding container 6e600c21e19a6750d960b3bf2aae63b0568db19f169e910b3f734aa25c31b674: Status 404 returned error can't find the container with id 6e600c21e19a6750d960b3bf2aae63b0568db19f169e910b3f734aa25c31b674 Dec 02 20:34:29 crc kubenswrapper[4796]: I1202 20:34:29.248426 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca","Type":"ContainerStarted","Data":"cf281ac8208f1dedcef205991a3c8b403a02cf12fce41b742958d096c5b1b00b"} Dec 02 20:34:29 crc kubenswrapper[4796]: I1202 20:34:29.248521 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca","Type":"ContainerStarted","Data":"e501a9120002a9839f62ce3808ab4937c50c70cc99eb179be6d6542e2c0aedf5"} Dec 02 20:34:29 crc kubenswrapper[4796]: I1202 20:34:29.259023 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" 
event={"ID":"b564e2ad-1452-419e-9bba-1570f3341bdf","Type":"ContainerStarted","Data":"ef8eb74dbde755664cbf83be7571933b2a8f505a3534ecc66002fb8ec311e261"} Dec 02 20:34:29 crc kubenswrapper[4796]: I1202 20:34:29.259086 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"b564e2ad-1452-419e-9bba-1570f3341bdf","Type":"ContainerStarted","Data":"6e600c21e19a6750d960b3bf2aae63b0568db19f169e910b3f734aa25c31b674"} Dec 02 20:34:29 crc kubenswrapper[4796]: I1202 20:34:29.291298 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.291240917 podStartE2EDuration="2.291240917s" podCreationTimestamp="2025-12-02 20:34:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:34:29.273720578 +0000 UTC m=+1352.277096122" watchObservedRunningTime="2025-12-02 20:34:29.291240917 +0000 UTC m=+1352.294616461" Dec 02 20:34:29 crc kubenswrapper[4796]: I1202 20:34:29.301975 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"8a17ed05-8780-4027-ab72-4008c36a4a47","Type":"ContainerStarted","Data":"c7a15e092f46e000f11431079349ee5b966de277e698e263af281aabbd29e7f6"} Dec 02 20:34:29 crc kubenswrapper[4796]: I1202 20:34:29.302106 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"8a17ed05-8780-4027-ab72-4008c36a4a47","Type":"ContainerStarted","Data":"83eecdf182d9682abdfd178b047acacf33c4e3e70234ef1b155440b98605b394"} Dec 02 20:34:29 crc kubenswrapper[4796]: I1202 20:34:29.304800 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.304775157 podStartE2EDuration="2.304775157s" podCreationTimestamp="2025-12-02 20:34:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:34:29.299418256 +0000 UTC m=+1352.302793820" watchObservedRunningTime="2025-12-02 20:34:29.304775157 +0000 UTC m=+1352.308150691" Dec 02 20:34:30 crc kubenswrapper[4796]: I1202 20:34:30.286039 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"8a17ed05-8780-4027-ab72-4008c36a4a47","Type":"ContainerStarted","Data":"7eff69f4d423501b8ecb998737a076e6081f349683f46f642db6338ec69e4e48"} Dec 02 20:34:30 crc kubenswrapper[4796]: I1202 20:34:30.313310 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=3.313282311 podStartE2EDuration="3.313282311s" podCreationTimestamp="2025-12-02 20:34:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:34:30.311683752 +0000 UTC m=+1353.315059306" watchObservedRunningTime="2025-12-02 20:34:30.313282311 +0000 UTC m=+1353.316657855" Dec 02 20:34:31 crc kubenswrapper[4796]: I1202 20:34:31.296429 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:34:32 crc kubenswrapper[4796]: I1202 20:34:32.952868 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:34:32 crc 
kubenswrapper[4796]: I1202 20:34:32.967524 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:34:33 crc kubenswrapper[4796]: I1202 20:34:33.310325 4796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 20:34:33 crc kubenswrapper[4796]: I1202 20:34:33.813227 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:34:37 crc kubenswrapper[4796]: I1202 20:34:37.952508 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:34:37 crc kubenswrapper[4796]: I1202 20:34:37.966678 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:34:37 crc kubenswrapper[4796]: I1202 20:34:37.967426 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:34:38 crc kubenswrapper[4796]: I1202 20:34:38.024299 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:34:38 crc kubenswrapper[4796]: I1202 20:34:38.207587 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:34:38 crc kubenswrapper[4796]: I1202 20:34:38.248015 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:34:38 crc kubenswrapper[4796]: I1202 20:34:38.357733 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:34:38 crc kubenswrapper[4796]: I1202 20:34:38.366862 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:34:38 crc kubenswrapper[4796]: I1202 20:34:38.396857 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:34:38 crc kubenswrapper[4796]: I1202 20:34:38.397694 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:34:40 crc kubenswrapper[4796]: I1202 20:34:40.078963 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-76g7n"] Dec 02 20:34:40 crc kubenswrapper[4796]: I1202 20:34:40.088501 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-76g7n"] Dec 02 20:34:40 crc kubenswrapper[4796]: I1202 20:34:40.127869 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:34:40 crc kubenswrapper[4796]: I1202 20:34:40.196526 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher8aa5-account-delete-zqx42"] Dec 02 20:34:40 crc kubenswrapper[4796]: I1202 20:34:40.200400 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher8aa5-account-delete-zqx42" Dec 02 20:34:40 crc kubenswrapper[4796]: I1202 20:34:40.230403 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:34:40 crc kubenswrapper[4796]: I1202 20:34:40.232131 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn4jl\" (UniqueName: \"kubernetes.io/projected/f135ea48-5363-4751-abd8-a1bf98a054f0-kube-api-access-mn4jl\") pod \"watcher8aa5-account-delete-zqx42\" (UID: \"f135ea48-5363-4751-abd8-a1bf98a054f0\") " pod="watcher-kuttl-default/watcher8aa5-account-delete-zqx42" Dec 02 20:34:40 crc kubenswrapper[4796]: I1202 20:34:40.232428 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f135ea48-5363-4751-abd8-a1bf98a054f0-operator-scripts\") pod \"watcher8aa5-account-delete-zqx42\" (UID: \"f135ea48-5363-4751-abd8-a1bf98a054f0\") " pod="watcher-kuttl-default/watcher8aa5-account-delete-zqx42" Dec 02 20:34:40 crc kubenswrapper[4796]: I1202 20:34:40.260433 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:34:40 crc kubenswrapper[4796]: I1202 20:34:40.270404 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher8aa5-account-delete-zqx42"] Dec 02 20:34:40 crc kubenswrapper[4796]: I1202 20:34:40.334193 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn4jl\" (UniqueName: \"kubernetes.io/projected/f135ea48-5363-4751-abd8-a1bf98a054f0-kube-api-access-mn4jl\") pod \"watcher8aa5-account-delete-zqx42\" (UID: \"f135ea48-5363-4751-abd8-a1bf98a054f0\") " pod="watcher-kuttl-default/watcher8aa5-account-delete-zqx42" Dec 02 20:34:40 crc kubenswrapper[4796]: I1202 20:34:40.334772 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f135ea48-5363-4751-abd8-a1bf98a054f0-operator-scripts\") pod \"watcher8aa5-account-delete-zqx42\" (UID: \"f135ea48-5363-4751-abd8-a1bf98a054f0\") " pod="watcher-kuttl-default/watcher8aa5-account-delete-zqx42" Dec 02 20:34:40 crc kubenswrapper[4796]: I1202 20:34:40.335849 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f135ea48-5363-4751-abd8-a1bf98a054f0-operator-scripts\") pod \"watcher8aa5-account-delete-zqx42\" (UID: \"f135ea48-5363-4751-abd8-a1bf98a054f0\") " pod="watcher-kuttl-default/watcher8aa5-account-delete-zqx42" Dec 02 20:34:40 crc kubenswrapper[4796]: I1202 20:34:40.357994 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn4jl\" (UniqueName: \"kubernetes.io/projected/f135ea48-5363-4751-abd8-a1bf98a054f0-kube-api-access-mn4jl\") pod \"watcher8aa5-account-delete-zqx42\" (UID: \"f135ea48-5363-4751-abd8-a1bf98a054f0\") " pod="watcher-kuttl-default/watcher8aa5-account-delete-zqx42" Dec 02 20:34:40 crc kubenswrapper[4796]: I1202 20:34:40.374569 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="8a17ed05-8780-4027-ab72-4008c36a4a47" containerName="watcher-api" containerID="cri-o://7eff69f4d423501b8ecb998737a076e6081f349683f46f642db6338ec69e4e48" gracePeriod=30 Dec 02 20:34:40 crc kubenswrapper[4796]: I1202 
20:34:40.374542 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="8a17ed05-8780-4027-ab72-4008c36a4a47" containerName="watcher-kuttl-api-log" containerID="cri-o://c7a15e092f46e000f11431079349ee5b966de277e698e263af281aabbd29e7f6" gracePeriod=30 Dec 02 20:34:40 crc kubenswrapper[4796]: I1202 20:34:40.375021 4796 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" secret="" err="secret \"watcher-watcher-kuttl-dockercfg-dqvqp\" not found" Dec 02 20:34:40 crc kubenswrapper[4796]: I1202 20:34:40.375386 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca" containerName="watcher-applier" containerID="cri-o://cf281ac8208f1dedcef205991a3c8b403a02cf12fce41b742958d096c5b1b00b" gracePeriod=30 Dec 02 20:34:40 crc kubenswrapper[4796]: E1202 20:34:40.538457 4796 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found Dec 02 20:34:40 crc kubenswrapper[4796]: E1202 20:34:40.538600 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b564e2ad-1452-419e-9bba-1570f3341bdf-config-data podName:b564e2ad-1452-419e-9bba-1570f3341bdf nodeName:}" failed. No retries permitted until 2025-12-02 20:34:41.038572417 +0000 UTC m=+1364.041947951 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/b564e2ad-1452-419e-9bba-1570f3341bdf-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "b564e2ad-1452-419e-9bba-1570f3341bdf") : secret "watcher-kuttl-decision-engine-config-data" not found Dec 02 20:34:40 crc kubenswrapper[4796]: I1202 20:34:40.567376 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher8aa5-account-delete-zqx42" Dec 02 20:34:41 crc kubenswrapper[4796]: E1202 20:34:41.063194 4796 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found Dec 02 20:34:41 crc kubenswrapper[4796]: E1202 20:34:41.064119 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b564e2ad-1452-419e-9bba-1570f3341bdf-config-data podName:b564e2ad-1452-419e-9bba-1570f3341bdf nodeName:}" failed. No retries permitted until 2025-12-02 20:34:42.064089869 +0000 UTC m=+1365.067465403 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/b564e2ad-1452-419e-9bba-1570f3341bdf-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "b564e2ad-1452-419e-9bba-1570f3341bdf") : secret "watcher-kuttl-decision-engine-config-data" not found Dec 02 20:34:41 crc kubenswrapper[4796]: I1202 20:34:41.067878 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:34:41 crc kubenswrapper[4796]: I1202 20:34:41.068359 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="0773fae9-ae8b-4e87-bec4-a1d605ce9e99" containerName="ceilometer-central-agent" containerID="cri-o://c9eae20ef7067069f73faf7c774005a793fcd679a01b0ce9db1fb7f98bca043f" gracePeriod=30 Dec 02 20:34:41 crc kubenswrapper[4796]: I1202 20:34:41.068530 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="0773fae9-ae8b-4e87-bec4-a1d605ce9e99" containerName="proxy-httpd" containerID="cri-o://2dc2f7c7ac351497b23a0d13b5df4ccb403f39436dc8a9e18b819c6d1fe352b6" gracePeriod=30 Dec 02 20:34:41 crc kubenswrapper[4796]: I1202 20:34:41.068585 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="0773fae9-ae8b-4e87-bec4-a1d605ce9e99" containerName="sg-core" containerID="cri-o://260aae30ba9f0782443069da17b1868cdd7b6343555943a32e53dc68a9861d9d" gracePeriod=30 Dec 02 20:34:41 crc kubenswrapper[4796]: I1202 20:34:41.068637 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="0773fae9-ae8b-4e87-bec4-a1d605ce9e99" containerName="ceilometer-notification-agent" containerID="cri-o://11d879d8225c5cfe48244f6aade812aed60b8be7db2138b3d2f9e8b0a93067c4" gracePeriod=30 Dec 02 20:34:41 crc kubenswrapper[4796]: I1202 20:34:41.091802 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher8aa5-account-delete-zqx42"] Dec 02 20:34:41 crc kubenswrapper[4796]: I1202 20:34:41.111767 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="0773fae9-ae8b-4e87-bec4-a1d605ce9e99" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Dec 02 20:34:41 crc kubenswrapper[4796]: I1202 20:34:41.276664 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9adc563a-8133-4030-97e1-52e079b45951" path="/var/lib/kubelet/pods/9adc563a-8133-4030-97e1-52e079b45951/volumes" Dec 02 20:34:41 crc kubenswrapper[4796]: I1202 20:34:41.384030 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher8aa5-account-delete-zqx42" event={"ID":"f135ea48-5363-4751-abd8-a1bf98a054f0","Type":"ContainerStarted","Data":"c102c97b657392894ca5c4e4deee047458480c4ebf3f0c34c01d9aefe9181930"} Dec 02 20:34:41 crc kubenswrapper[4796]: I1202 20:34:41.386376 4796 generic.go:334] "Generic (PLEG): container finished" podID="8a17ed05-8780-4027-ab72-4008c36a4a47" containerID="c7a15e092f46e000f11431079349ee5b966de277e698e263af281aabbd29e7f6" exitCode=143 Dec 02 20:34:41 crc kubenswrapper[4796]: I1202 20:34:41.386458 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"8a17ed05-8780-4027-ab72-4008c36a4a47","Type":"ContainerDied","Data":"c7a15e092f46e000f11431079349ee5b966de277e698e263af281aabbd29e7f6"} Dec 02 20:34:41 
crc kubenswrapper[4796]: I1202 20:34:41.390437 4796 generic.go:334] "Generic (PLEG): container finished" podID="0773fae9-ae8b-4e87-bec4-a1d605ce9e99" containerID="260aae30ba9f0782443069da17b1868cdd7b6343555943a32e53dc68a9861d9d" exitCode=2 Dec 02 20:34:41 crc kubenswrapper[4796]: I1202 20:34:41.390669 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="b564e2ad-1452-419e-9bba-1570f3341bdf" containerName="watcher-decision-engine" containerID="cri-o://ef8eb74dbde755664cbf83be7571933b2a8f505a3534ecc66002fb8ec311e261" gracePeriod=30 Dec 02 20:34:41 crc kubenswrapper[4796]: I1202 20:34:41.390930 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0773fae9-ae8b-4e87-bec4-a1d605ce9e99","Type":"ContainerDied","Data":"260aae30ba9f0782443069da17b1868cdd7b6343555943a32e53dc68a9861d9d"} Dec 02 20:34:41 crc kubenswrapper[4796]: I1202 20:34:41.739820 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="0773fae9-ae8b-4e87-bec4-a1d605ce9e99" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.136:3000/\": dial tcp 10.217.0.136:3000: connect: connection refused" Dec 02 20:34:41 crc kubenswrapper[4796]: I1202 20:34:41.881402 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:34:41 crc kubenswrapper[4796]: I1202 20:34:41.980881 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a17ed05-8780-4027-ab72-4008c36a4a47-config-data\") pod \"8a17ed05-8780-4027-ab72-4008c36a4a47\" (UID: \"8a17ed05-8780-4027-ab72-4008c36a4a47\") " Dec 02 20:34:41 crc kubenswrapper[4796]: I1202 20:34:41.980942 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a17ed05-8780-4027-ab72-4008c36a4a47-logs\") pod \"8a17ed05-8780-4027-ab72-4008c36a4a47\" (UID: \"8a17ed05-8780-4027-ab72-4008c36a4a47\") " Dec 02 20:34:41 crc kubenswrapper[4796]: I1202 20:34:41.980979 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nngmp\" (UniqueName: \"kubernetes.io/projected/8a17ed05-8780-4027-ab72-4008c36a4a47-kube-api-access-nngmp\") pod \"8a17ed05-8780-4027-ab72-4008c36a4a47\" (UID: \"8a17ed05-8780-4027-ab72-4008c36a4a47\") " Dec 02 20:34:41 crc kubenswrapper[4796]: I1202 20:34:41.981004 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8a17ed05-8780-4027-ab72-4008c36a4a47-custom-prometheus-ca\") pod \"8a17ed05-8780-4027-ab72-4008c36a4a47\" (UID: \"8a17ed05-8780-4027-ab72-4008c36a4a47\") " Dec 02 20:34:41 crc kubenswrapper[4796]: I1202 20:34:41.981067 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a17ed05-8780-4027-ab72-4008c36a4a47-combined-ca-bundle\") pod \"8a17ed05-8780-4027-ab72-4008c36a4a47\" (UID: \"8a17ed05-8780-4027-ab72-4008c36a4a47\") " Dec 02 20:34:41 crc kubenswrapper[4796]: I1202 20:34:41.982553 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a17ed05-8780-4027-ab72-4008c36a4a47-logs" (OuterVolumeSpecName: "logs") pod "8a17ed05-8780-4027-ab72-4008c36a4a47" (UID: 
"8a17ed05-8780-4027-ab72-4008c36a4a47"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:34:41 crc kubenswrapper[4796]: I1202 20:34:41.988392 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a17ed05-8780-4027-ab72-4008c36a4a47-kube-api-access-nngmp" (OuterVolumeSpecName: "kube-api-access-nngmp") pod "8a17ed05-8780-4027-ab72-4008c36a4a47" (UID: "8a17ed05-8780-4027-ab72-4008c36a4a47"). InnerVolumeSpecName "kube-api-access-nngmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:34:42 crc kubenswrapper[4796]: I1202 20:34:42.021190 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a17ed05-8780-4027-ab72-4008c36a4a47-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a17ed05-8780-4027-ab72-4008c36a4a47" (UID: "8a17ed05-8780-4027-ab72-4008c36a4a47"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:34:42 crc kubenswrapper[4796]: I1202 20:34:42.021341 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a17ed05-8780-4027-ab72-4008c36a4a47-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "8a17ed05-8780-4027-ab72-4008c36a4a47" (UID: "8a17ed05-8780-4027-ab72-4008c36a4a47"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:34:42 crc kubenswrapper[4796]: I1202 20:34:42.042810 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a17ed05-8780-4027-ab72-4008c36a4a47-config-data" (OuterVolumeSpecName: "config-data") pod "8a17ed05-8780-4027-ab72-4008c36a4a47" (UID: "8a17ed05-8780-4027-ab72-4008c36a4a47"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:34:42 crc kubenswrapper[4796]: I1202 20:34:42.083241 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a17ed05-8780-4027-ab72-4008c36a4a47-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:42 crc kubenswrapper[4796]: I1202 20:34:42.083325 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a17ed05-8780-4027-ab72-4008c36a4a47-logs\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:42 crc kubenswrapper[4796]: I1202 20:34:42.083339 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nngmp\" (UniqueName: \"kubernetes.io/projected/8a17ed05-8780-4027-ab72-4008c36a4a47-kube-api-access-nngmp\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:42 crc kubenswrapper[4796]: I1202 20:34:42.083352 4796 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8a17ed05-8780-4027-ab72-4008c36a4a47-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:42 crc kubenswrapper[4796]: I1202 20:34:42.083364 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a17ed05-8780-4027-ab72-4008c36a4a47-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:42 crc kubenswrapper[4796]: E1202 20:34:42.083406 4796 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found Dec 02 20:34:42 crc kubenswrapper[4796]: E1202 20:34:42.083516 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b564e2ad-1452-419e-9bba-1570f3341bdf-config-data podName:b564e2ad-1452-419e-9bba-1570f3341bdf nodeName:}" failed. No retries permitted until 2025-12-02 20:34:44.083491105 +0000 UTC m=+1367.086866669 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/b564e2ad-1452-419e-9bba-1570f3341bdf-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "b564e2ad-1452-419e-9bba-1570f3341bdf") : secret "watcher-kuttl-decision-engine-config-data" not found Dec 02 20:34:42 crc kubenswrapper[4796]: I1202 20:34:42.402563 4796 generic.go:334] "Generic (PLEG): container finished" podID="0773fae9-ae8b-4e87-bec4-a1d605ce9e99" containerID="2dc2f7c7ac351497b23a0d13b5df4ccb403f39436dc8a9e18b819c6d1fe352b6" exitCode=0 Dec 02 20:34:42 crc kubenswrapper[4796]: I1202 20:34:42.403036 4796 generic.go:334] "Generic (PLEG): container finished" podID="0773fae9-ae8b-4e87-bec4-a1d605ce9e99" containerID="c9eae20ef7067069f73faf7c774005a793fcd679a01b0ce9db1fb7f98bca043f" exitCode=0 Dec 02 20:34:42 crc kubenswrapper[4796]: I1202 20:34:42.402634 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0773fae9-ae8b-4e87-bec4-a1d605ce9e99","Type":"ContainerDied","Data":"2dc2f7c7ac351497b23a0d13b5df4ccb403f39436dc8a9e18b819c6d1fe352b6"} Dec 02 20:34:42 crc kubenswrapper[4796]: I1202 20:34:42.403088 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0773fae9-ae8b-4e87-bec4-a1d605ce9e99","Type":"ContainerDied","Data":"c9eae20ef7067069f73faf7c774005a793fcd679a01b0ce9db1fb7f98bca043f"} Dec 02 20:34:42 crc kubenswrapper[4796]: I1202 20:34:42.406584 4796 generic.go:334] "Generic (PLEG): container finished" podID="f135ea48-5363-4751-abd8-a1bf98a054f0" containerID="c9a43d77cdc62f84c440c20fb089546c4fac659f38121ea1027d82204fc1db07" exitCode=0 Dec 02 20:34:42 crc kubenswrapper[4796]: I1202 20:34:42.406668 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher8aa5-account-delete-zqx42" event={"ID":"f135ea48-5363-4751-abd8-a1bf98a054f0","Type":"ContainerDied","Data":"c9a43d77cdc62f84c440c20fb089546c4fac659f38121ea1027d82204fc1db07"} Dec 02 20:34:42 crc kubenswrapper[4796]: I1202 20:34:42.408778 4796 generic.go:334] "Generic (PLEG): container finished" podID="8a17ed05-8780-4027-ab72-4008c36a4a47" containerID="7eff69f4d423501b8ecb998737a076e6081f349683f46f642db6338ec69e4e48" exitCode=0 Dec 02 20:34:42 crc kubenswrapper[4796]: I1202 20:34:42.408813 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:34:42 crc kubenswrapper[4796]: I1202 20:34:42.408832 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"8a17ed05-8780-4027-ab72-4008c36a4a47","Type":"ContainerDied","Data":"7eff69f4d423501b8ecb998737a076e6081f349683f46f642db6338ec69e4e48"} Dec 02 20:34:42 crc kubenswrapper[4796]: I1202 20:34:42.408889 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"8a17ed05-8780-4027-ab72-4008c36a4a47","Type":"ContainerDied","Data":"83eecdf182d9682abdfd178b047acacf33c4e3e70234ef1b155440b98605b394"} Dec 02 20:34:42 crc kubenswrapper[4796]: I1202 20:34:42.408914 4796 scope.go:117] "RemoveContainer" containerID="7eff69f4d423501b8ecb998737a076e6081f349683f46f642db6338ec69e4e48" Dec 02 20:34:42 crc kubenswrapper[4796]: I1202 20:34:42.463575 4796 scope.go:117] "RemoveContainer" containerID="c7a15e092f46e000f11431079349ee5b966de277e698e263af281aabbd29e7f6" Dec 02 20:34:42 crc kubenswrapper[4796]: I1202 20:34:42.475894 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:34:42 crc kubenswrapper[4796]: I1202 20:34:42.490573 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:34:42 crc kubenswrapper[4796]: I1202 20:34:42.501543 4796 scope.go:117] "RemoveContainer" containerID="7eff69f4d423501b8ecb998737a076e6081f349683f46f642db6338ec69e4e48" Dec 02 20:34:42 crc kubenswrapper[4796]: E1202 20:34:42.503682 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7eff69f4d423501b8ecb998737a076e6081f349683f46f642db6338ec69e4e48\": container with ID starting with 7eff69f4d423501b8ecb998737a076e6081f349683f46f642db6338ec69e4e48 not found: ID does not exist" containerID="7eff69f4d423501b8ecb998737a076e6081f349683f46f642db6338ec69e4e48" Dec 02 20:34:42 crc kubenswrapper[4796]: I1202 20:34:42.503727 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7eff69f4d423501b8ecb998737a076e6081f349683f46f642db6338ec69e4e48"} err="failed to get container status \"7eff69f4d423501b8ecb998737a076e6081f349683f46f642db6338ec69e4e48\": rpc error: code = NotFound desc = could not find container \"7eff69f4d423501b8ecb998737a076e6081f349683f46f642db6338ec69e4e48\": container with ID starting with 7eff69f4d423501b8ecb998737a076e6081f349683f46f642db6338ec69e4e48 not found: ID does not exist" Dec 02 20:34:42 crc kubenswrapper[4796]: I1202 20:34:42.503755 4796 scope.go:117] "RemoveContainer" containerID="c7a15e092f46e000f11431079349ee5b966de277e698e263af281aabbd29e7f6" Dec 02 20:34:42 crc kubenswrapper[4796]: E1202 20:34:42.504200 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7a15e092f46e000f11431079349ee5b966de277e698e263af281aabbd29e7f6\": container with ID starting with c7a15e092f46e000f11431079349ee5b966de277e698e263af281aabbd29e7f6 not found: ID does not exist" containerID="c7a15e092f46e000f11431079349ee5b966de277e698e263af281aabbd29e7f6" Dec 02 20:34:42 crc kubenswrapper[4796]: I1202 20:34:42.504226 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7a15e092f46e000f11431079349ee5b966de277e698e263af281aabbd29e7f6"} err="failed to get container status 
\"c7a15e092f46e000f11431079349ee5b966de277e698e263af281aabbd29e7f6\": rpc error: code = NotFound desc = could not find container \"c7a15e092f46e000f11431079349ee5b966de277e698e263af281aabbd29e7f6\": container with ID starting with c7a15e092f46e000f11431079349ee5b966de277e698e263af281aabbd29e7f6 not found: ID does not exist" Dec 02 20:34:42 crc kubenswrapper[4796]: E1202 20:34:42.969460 4796 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cf281ac8208f1dedcef205991a3c8b403a02cf12fce41b742958d096c5b1b00b" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 02 20:34:42 crc kubenswrapper[4796]: E1202 20:34:42.970749 4796 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cf281ac8208f1dedcef205991a3c8b403a02cf12fce41b742958d096c5b1b00b" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 02 20:34:42 crc kubenswrapper[4796]: E1202 20:34:42.971943 4796 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cf281ac8208f1dedcef205991a3c8b403a02cf12fce41b742958d096c5b1b00b" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 02 20:34:42 crc kubenswrapper[4796]: E1202 20:34:42.971972 4796 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca" containerName="watcher-applier" Dec 02 20:34:43 crc kubenswrapper[4796]: I1202 20:34:43.273529 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a17ed05-8780-4027-ab72-4008c36a4a47" path="/var/lib/kubelet/pods/8a17ed05-8780-4027-ab72-4008c36a4a47/volumes" Dec 02 20:34:43 crc kubenswrapper[4796]: I1202 20:34:43.835634 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher8aa5-account-delete-zqx42" Dec 02 20:34:44 crc kubenswrapper[4796]: I1202 20:34:44.018721 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn4jl\" (UniqueName: \"kubernetes.io/projected/f135ea48-5363-4751-abd8-a1bf98a054f0-kube-api-access-mn4jl\") pod \"f135ea48-5363-4751-abd8-a1bf98a054f0\" (UID: \"f135ea48-5363-4751-abd8-a1bf98a054f0\") " Dec 02 20:34:44 crc kubenswrapper[4796]: I1202 20:34:44.018862 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f135ea48-5363-4751-abd8-a1bf98a054f0-operator-scripts\") pod \"f135ea48-5363-4751-abd8-a1bf98a054f0\" (UID: \"f135ea48-5363-4751-abd8-a1bf98a054f0\") " Dec 02 20:34:44 crc kubenswrapper[4796]: I1202 20:34:44.020204 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f135ea48-5363-4751-abd8-a1bf98a054f0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f135ea48-5363-4751-abd8-a1bf98a054f0" (UID: "f135ea48-5363-4751-abd8-a1bf98a054f0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:34:44 crc kubenswrapper[4796]: I1202 20:34:44.026596 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f135ea48-5363-4751-abd8-a1bf98a054f0-kube-api-access-mn4jl" (OuterVolumeSpecName: "kube-api-access-mn4jl") pod "f135ea48-5363-4751-abd8-a1bf98a054f0" (UID: "f135ea48-5363-4751-abd8-a1bf98a054f0"). InnerVolumeSpecName "kube-api-access-mn4jl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:34:44 crc kubenswrapper[4796]: I1202 20:34:44.120972 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mn4jl\" (UniqueName: \"kubernetes.io/projected/f135ea48-5363-4751-abd8-a1bf98a054f0-kube-api-access-mn4jl\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:44 crc kubenswrapper[4796]: I1202 20:34:44.121009 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f135ea48-5363-4751-abd8-a1bf98a054f0-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:44 crc kubenswrapper[4796]: E1202 20:34:44.121112 4796 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found Dec 02 20:34:44 crc kubenswrapper[4796]: E1202 20:34:44.121243 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b564e2ad-1452-419e-9bba-1570f3341bdf-config-data podName:b564e2ad-1452-419e-9bba-1570f3341bdf nodeName:}" failed. No retries permitted until 2025-12-02 20:34:48.121185021 +0000 UTC m=+1371.124560585 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/b564e2ad-1452-419e-9bba-1570f3341bdf-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "b564e2ad-1452-419e-9bba-1570f3341bdf") : secret "watcher-kuttl-decision-engine-config-data" not found Dec 02 20:34:44 crc kubenswrapper[4796]: I1202 20:34:44.440336 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher8aa5-account-delete-zqx42" event={"ID":"f135ea48-5363-4751-abd8-a1bf98a054f0","Type":"ContainerDied","Data":"c102c97b657392894ca5c4e4deee047458480c4ebf3f0c34c01d9aefe9181930"} Dec 02 20:34:44 crc kubenswrapper[4796]: I1202 20:34:44.440384 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher8aa5-account-delete-zqx42" Dec 02 20:34:44 crc kubenswrapper[4796]: I1202 20:34:44.440407 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c102c97b657392894ca5c4e4deee047458480c4ebf3f0c34c01d9aefe9181930" Dec 02 20:34:45 crc kubenswrapper[4796]: I1202 20:34:45.239957 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-fzp22"] Dec 02 20:34:45 crc kubenswrapper[4796]: I1202 20:34:45.246770 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-fzp22"] Dec 02 20:34:45 crc kubenswrapper[4796]: I1202 20:34:45.258846 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-8aa5-account-create-update-h7mxs"] Dec 02 20:34:45 crc kubenswrapper[4796]: I1202 20:34:45.277339 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="095471de-2b47-461b-a709-3254f5056853" path="/var/lib/kubelet/pods/095471de-2b47-461b-a709-3254f5056853/volumes" Dec 02 20:34:45 crc kubenswrapper[4796]: I1202 20:34:45.277970 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher8aa5-account-delete-zqx42"] Dec 02 20:34:45 crc kubenswrapper[4796]: I1202 20:34:45.277995 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-8aa5-account-create-update-h7mxs"] Dec 02 20:34:45 crc kubenswrapper[4796]: I1202 20:34:45.280588 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher8aa5-account-delete-zqx42"] Dec 02 20:34:45 crc kubenswrapper[4796]: I1202 20:34:45.443400 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:34:45 crc kubenswrapper[4796]: I1202 20:34:45.448971 4796 generic.go:334] "Generic (PLEG): container finished" podID="1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca" containerID="cf281ac8208f1dedcef205991a3c8b403a02cf12fce41b742958d096c5b1b00b" exitCode=0 Dec 02 20:34:45 crc kubenswrapper[4796]: I1202 20:34:45.449025 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca","Type":"ContainerDied","Data":"cf281ac8208f1dedcef205991a3c8b403a02cf12fce41b742958d096c5b1b00b"} Dec 02 20:34:45 crc kubenswrapper[4796]: I1202 20:34:45.449065 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:34:45 crc kubenswrapper[4796]: I1202 20:34:45.449090 4796 scope.go:117] "RemoveContainer" containerID="cf281ac8208f1dedcef205991a3c8b403a02cf12fce41b742958d096c5b1b00b" Dec 02 20:34:45 crc kubenswrapper[4796]: I1202 20:34:45.449069 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca","Type":"ContainerDied","Data":"e501a9120002a9839f62ce3808ab4937c50c70cc99eb179be6d6542e2c0aedf5"} Dec 02 20:34:45 crc kubenswrapper[4796]: I1202 20:34:45.484320 4796 scope.go:117] "RemoveContainer" containerID="cf281ac8208f1dedcef205991a3c8b403a02cf12fce41b742958d096c5b1b00b" Dec 02 20:34:45 crc kubenswrapper[4796]: E1202 20:34:45.484975 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf281ac8208f1dedcef205991a3c8b403a02cf12fce41b742958d096c5b1b00b\": container with ID starting with cf281ac8208f1dedcef205991a3c8b403a02cf12fce41b742958d096c5b1b00b not found: ID does not exist" containerID="cf281ac8208f1dedcef205991a3c8b403a02cf12fce41b742958d096c5b1b00b" Dec 02 20:34:45 crc kubenswrapper[4796]: I1202 20:34:45.485042 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf281ac8208f1dedcef205991a3c8b403a02cf12fce41b742958d096c5b1b00b"} err="failed to get container status \"cf281ac8208f1dedcef205991a3c8b403a02cf12fce41b742958d096c5b1b00b\": rpc error: code = NotFound desc = could not find container \"cf281ac8208f1dedcef205991a3c8b403a02cf12fce41b742958d096c5b1b00b\": container with ID starting with cf281ac8208f1dedcef205991a3c8b403a02cf12fce41b742958d096c5b1b00b not found: ID does not exist" Dec 02 20:34:45 crc kubenswrapper[4796]: I1202 20:34:45.556898 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca-config-data\") pod \"1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca\" (UID: \"1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca\") " Dec 02 20:34:45 crc kubenswrapper[4796]: I1202 20:34:45.557349 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca-logs\") pod \"1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca\" (UID: \"1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca\") " Dec 02 20:34:45 crc kubenswrapper[4796]: I1202 20:34:45.557549 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca-combined-ca-bundle\") pod \"1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca\" (UID: \"1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca\") " Dec 02 20:34:45 crc kubenswrapper[4796]: I1202 20:34:45.557722 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca-logs" (OuterVolumeSpecName: "logs") pod "1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca" (UID: "1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:34:45 crc kubenswrapper[4796]: I1202 20:34:45.558027 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgzzj\" (UniqueName: \"kubernetes.io/projected/1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca-kube-api-access-jgzzj\") pod \"1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca\" (UID: \"1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca\") " Dec 02 20:34:45 crc kubenswrapper[4796]: I1202 20:34:45.558776 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca-logs\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:45 crc kubenswrapper[4796]: I1202 20:34:45.580738 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca-kube-api-access-jgzzj" (OuterVolumeSpecName: "kube-api-access-jgzzj") pod "1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca" (UID: "1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca"). InnerVolumeSpecName "kube-api-access-jgzzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:34:45 crc kubenswrapper[4796]: I1202 20:34:45.586768 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca" (UID: "1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:34:45 crc kubenswrapper[4796]: I1202 20:34:45.638460 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca-config-data" (OuterVolumeSpecName: "config-data") pod "1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca" (UID: "1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:34:45 crc kubenswrapper[4796]: I1202 20:34:45.660647 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgzzj\" (UniqueName: \"kubernetes.io/projected/1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca-kube-api-access-jgzzj\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:45 crc kubenswrapper[4796]: I1202 20:34:45.660691 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:45 crc kubenswrapper[4796]: I1202 20:34:45.660702 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:45 crc kubenswrapper[4796]: I1202 20:34:45.789725 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:34:45 crc kubenswrapper[4796]: I1202 20:34:45.799621 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.386924 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.467530 4796 generic.go:334] "Generic (PLEG): container finished" podID="0773fae9-ae8b-4e87-bec4-a1d605ce9e99" containerID="11d879d8225c5cfe48244f6aade812aed60b8be7db2138b3d2f9e8b0a93067c4" exitCode=0 Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.467620 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0773fae9-ae8b-4e87-bec4-a1d605ce9e99","Type":"ContainerDied","Data":"11d879d8225c5cfe48244f6aade812aed60b8be7db2138b3d2f9e8b0a93067c4"} Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.467658 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0773fae9-ae8b-4e87-bec4-a1d605ce9e99","Type":"ContainerDied","Data":"92b0ad087073f09a840e689df55bc2c4719d571e8d9f1423b1bb427f5a5ed07c"} Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.467683 4796 scope.go:117] "RemoveContainer" containerID="2dc2f7c7ac351497b23a0d13b5df4ccb403f39436dc8a9e18b819c6d1fe352b6" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.467886 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.475706 4796 generic.go:334] "Generic (PLEG): container finished" podID="b564e2ad-1452-419e-9bba-1570f3341bdf" containerID="ef8eb74dbde755664cbf83be7571933b2a8f505a3534ecc66002fb8ec311e261" exitCode=0 Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.475754 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"b564e2ad-1452-419e-9bba-1570f3341bdf","Type":"ContainerDied","Data":"ef8eb74dbde755664cbf83be7571933b2a8f505a3534ecc66002fb8ec311e261"} Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.480967 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-log-httpd\") pod \"0773fae9-ae8b-4e87-bec4-a1d605ce9e99\" (UID: \"0773fae9-ae8b-4e87-bec4-a1d605ce9e99\") " Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.481100 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-ceilometer-tls-certs\") pod \"0773fae9-ae8b-4e87-bec4-a1d605ce9e99\" (UID: \"0773fae9-ae8b-4e87-bec4-a1d605ce9e99\") " Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.481127 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-scripts\") pod \"0773fae9-ae8b-4e87-bec4-a1d605ce9e99\" (UID: \"0773fae9-ae8b-4e87-bec4-a1d605ce9e99\") " Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.481158 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrzzx\" (UniqueName: \"kubernetes.io/projected/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-kube-api-access-hrzzx\") pod \"0773fae9-ae8b-4e87-bec4-a1d605ce9e99\" (UID: \"0773fae9-ae8b-4e87-bec4-a1d605ce9e99\") " Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.481324 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-combined-ca-bundle\") pod \"0773fae9-ae8b-4e87-bec4-a1d605ce9e99\" (UID: \"0773fae9-ae8b-4e87-bec4-a1d605ce9e99\") " Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.481387 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-sg-core-conf-yaml\") pod \"0773fae9-ae8b-4e87-bec4-a1d605ce9e99\" (UID: \"0773fae9-ae8b-4e87-bec4-a1d605ce9e99\") " Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.481413 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-run-httpd\") pod \"0773fae9-ae8b-4e87-bec4-a1d605ce9e99\" (UID: \"0773fae9-ae8b-4e87-bec4-a1d605ce9e99\") " Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.481450 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-config-data\") pod \"0773fae9-ae8b-4e87-bec4-a1d605ce9e99\" (UID: \"0773fae9-ae8b-4e87-bec4-a1d605ce9e99\") " Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.489454 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0773fae9-ae8b-4e87-bec4-a1d605ce9e99" (UID: "0773fae9-ae8b-4e87-bec4-a1d605ce9e99"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.489822 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0773fae9-ae8b-4e87-bec4-a1d605ce9e99" (UID: "0773fae9-ae8b-4e87-bec4-a1d605ce9e99"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.491555 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-kube-api-access-hrzzx" (OuterVolumeSpecName: "kube-api-access-hrzzx") pod "0773fae9-ae8b-4e87-bec4-a1d605ce9e99" (UID: "0773fae9-ae8b-4e87-bec4-a1d605ce9e99"). InnerVolumeSpecName "kube-api-access-hrzzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.498876 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-scripts" (OuterVolumeSpecName: "scripts") pod "0773fae9-ae8b-4e87-bec4-a1d605ce9e99" (UID: "0773fae9-ae8b-4e87-bec4-a1d605ce9e99"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.517079 4796 scope.go:117] "RemoveContainer" containerID="260aae30ba9f0782443069da17b1868cdd7b6343555943a32e53dc68a9861d9d" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.538689 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0773fae9-ae8b-4e87-bec4-a1d605ce9e99" (UID: "0773fae9-ae8b-4e87-bec4-a1d605ce9e99"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.541507 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "0773fae9-ae8b-4e87-bec4-a1d605ce9e99" (UID: "0773fae9-ae8b-4e87-bec4-a1d605ce9e99"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.585376 4796 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.585417 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.585427 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrzzx\" (UniqueName: \"kubernetes.io/projected/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-kube-api-access-hrzzx\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.585438 4796 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.585454 4796 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.585464 4796 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.608514 4796 scope.go:117] "RemoveContainer" containerID="11d879d8225c5cfe48244f6aade812aed60b8be7db2138b3d2f9e8b0a93067c4" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.617558 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-config-data" (OuterVolumeSpecName: "config-data") pod "0773fae9-ae8b-4e87-bec4-a1d605ce9e99" (UID: "0773fae9-ae8b-4e87-bec4-a1d605ce9e99"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.622199 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0773fae9-ae8b-4e87-bec4-a1d605ce9e99" (UID: "0773fae9-ae8b-4e87-bec4-a1d605ce9e99"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.641627 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.647864 4796 scope.go:117] "RemoveContainer" containerID="c9eae20ef7067069f73faf7c774005a793fcd679a01b0ce9db1fb7f98bca043f" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.674505 4796 scope.go:117] "RemoveContainer" containerID="2dc2f7c7ac351497b23a0d13b5df4ccb403f39436dc8a9e18b819c6d1fe352b6" Dec 02 20:34:46 crc kubenswrapper[4796]: E1202 20:34:46.675125 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2dc2f7c7ac351497b23a0d13b5df4ccb403f39436dc8a9e18b819c6d1fe352b6\": container with ID starting with 2dc2f7c7ac351497b23a0d13b5df4ccb403f39436dc8a9e18b819c6d1fe352b6 not found: ID does not exist" containerID="2dc2f7c7ac351497b23a0d13b5df4ccb403f39436dc8a9e18b819c6d1fe352b6" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.675174 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dc2f7c7ac351497b23a0d13b5df4ccb403f39436dc8a9e18b819c6d1fe352b6"} err="failed to get container status \"2dc2f7c7ac351497b23a0d13b5df4ccb403f39436dc8a9e18b819c6d1fe352b6\": rpc error: code = NotFound desc = could not find container \"2dc2f7c7ac351497b23a0d13b5df4ccb403f39436dc8a9e18b819c6d1fe352b6\": container with ID starting with 2dc2f7c7ac351497b23a0d13b5df4ccb403f39436dc8a9e18b819c6d1fe352b6 not found: ID does not exist" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.675199 4796 scope.go:117] "RemoveContainer" containerID="260aae30ba9f0782443069da17b1868cdd7b6343555943a32e53dc68a9861d9d" Dec 02 20:34:46 crc kubenswrapper[4796]: E1202 20:34:46.675575 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"260aae30ba9f0782443069da17b1868cdd7b6343555943a32e53dc68a9861d9d\": container with ID starting with 260aae30ba9f0782443069da17b1868cdd7b6343555943a32e53dc68a9861d9d not found: ID does not exist" containerID="260aae30ba9f0782443069da17b1868cdd7b6343555943a32e53dc68a9861d9d" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.675627 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"260aae30ba9f0782443069da17b1868cdd7b6343555943a32e53dc68a9861d9d"} err="failed to get container status \"260aae30ba9f0782443069da17b1868cdd7b6343555943a32e53dc68a9861d9d\": rpc error: code = NotFound desc = could not find container \"260aae30ba9f0782443069da17b1868cdd7b6343555943a32e53dc68a9861d9d\": container with ID starting with 260aae30ba9f0782443069da17b1868cdd7b6343555943a32e53dc68a9861d9d not found: ID does not exist" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.675661 4796 scope.go:117] "RemoveContainer" containerID="11d879d8225c5cfe48244f6aade812aed60b8be7db2138b3d2f9e8b0a93067c4" Dec 02 20:34:46 crc kubenswrapper[4796]: E1202 20:34:46.683641 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11d879d8225c5cfe48244f6aade812aed60b8be7db2138b3d2f9e8b0a93067c4\": container with ID starting with 11d879d8225c5cfe48244f6aade812aed60b8be7db2138b3d2f9e8b0a93067c4 not found: ID does not exist" containerID="11d879d8225c5cfe48244f6aade812aed60b8be7db2138b3d2f9e8b0a93067c4" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.683680 4796 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"11d879d8225c5cfe48244f6aade812aed60b8be7db2138b3d2f9e8b0a93067c4"} err="failed to get container status \"11d879d8225c5cfe48244f6aade812aed60b8be7db2138b3d2f9e8b0a93067c4\": rpc error: code = NotFound desc = could not find container \"11d879d8225c5cfe48244f6aade812aed60b8be7db2138b3d2f9e8b0a93067c4\": container with ID starting with 11d879d8225c5cfe48244f6aade812aed60b8be7db2138b3d2f9e8b0a93067c4 not found: ID does not exist" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.683702 4796 scope.go:117] "RemoveContainer" containerID="c9eae20ef7067069f73faf7c774005a793fcd679a01b0ce9db1fb7f98bca043f" Dec 02 20:34:46 crc kubenswrapper[4796]: E1202 20:34:46.684074 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9eae20ef7067069f73faf7c774005a793fcd679a01b0ce9db1fb7f98bca043f\": container with ID starting with c9eae20ef7067069f73faf7c774005a793fcd679a01b0ce9db1fb7f98bca043f not found: ID does not exist" containerID="c9eae20ef7067069f73faf7c774005a793fcd679a01b0ce9db1fb7f98bca043f" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.684122 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9eae20ef7067069f73faf7c774005a793fcd679a01b0ce9db1fb7f98bca043f"} err="failed to get container status \"c9eae20ef7067069f73faf7c774005a793fcd679a01b0ce9db1fb7f98bca043f\": rpc error: code = NotFound desc = could not find container \"c9eae20ef7067069f73faf7c774005a793fcd679a01b0ce9db1fb7f98bca043f\": container with ID starting with c9eae20ef7067069f73faf7c774005a793fcd679a01b0ce9db1fb7f98bca043f not found: ID does not exist" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.687338 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.687362 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0773fae9-ae8b-4e87-bec4-a1d605ce9e99-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.788002 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b564e2ad-1452-419e-9bba-1570f3341bdf-custom-prometheus-ca\") pod \"b564e2ad-1452-419e-9bba-1570f3341bdf\" (UID: \"b564e2ad-1452-419e-9bba-1570f3341bdf\") " Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.788365 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8j2hh\" (UniqueName: \"kubernetes.io/projected/b564e2ad-1452-419e-9bba-1570f3341bdf-kube-api-access-8j2hh\") pod \"b564e2ad-1452-419e-9bba-1570f3341bdf\" (UID: \"b564e2ad-1452-419e-9bba-1570f3341bdf\") " Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.788425 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b564e2ad-1452-419e-9bba-1570f3341bdf-logs\") pod \"b564e2ad-1452-419e-9bba-1570f3341bdf\" (UID: \"b564e2ad-1452-419e-9bba-1570f3341bdf\") " Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.788465 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b564e2ad-1452-419e-9bba-1570f3341bdf-combined-ca-bundle\") pod \"b564e2ad-1452-419e-9bba-1570f3341bdf\" (UID: \"b564e2ad-1452-419e-9bba-1570f3341bdf\") " Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.788519 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b564e2ad-1452-419e-9bba-1570f3341bdf-config-data\") pod \"b564e2ad-1452-419e-9bba-1570f3341bdf\" (UID: \"b564e2ad-1452-419e-9bba-1570f3341bdf\") " Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.788924 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b564e2ad-1452-419e-9bba-1570f3341bdf-logs" (OuterVolumeSpecName: "logs") pod "b564e2ad-1452-419e-9bba-1570f3341bdf" (UID: "b564e2ad-1452-419e-9bba-1570f3341bdf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.793596 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b564e2ad-1452-419e-9bba-1570f3341bdf-kube-api-access-8j2hh" (OuterVolumeSpecName: "kube-api-access-8j2hh") pod "b564e2ad-1452-419e-9bba-1570f3341bdf" (UID: "b564e2ad-1452-419e-9bba-1570f3341bdf"). InnerVolumeSpecName "kube-api-access-8j2hh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.815418 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.821336 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b564e2ad-1452-419e-9bba-1570f3341bdf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b564e2ad-1452-419e-9bba-1570f3341bdf" (UID: "b564e2ad-1452-419e-9bba-1570f3341bdf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.831362 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b564e2ad-1452-419e-9bba-1570f3341bdf-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "b564e2ad-1452-419e-9bba-1570f3341bdf" (UID: "b564e2ad-1452-419e-9bba-1570f3341bdf"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.831459 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.844074 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:34:46 crc kubenswrapper[4796]: E1202 20:34:46.844569 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b564e2ad-1452-419e-9bba-1570f3341bdf" containerName="watcher-decision-engine" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.844599 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b564e2ad-1452-419e-9bba-1570f3341bdf" containerName="watcher-decision-engine" Dec 02 20:34:46 crc kubenswrapper[4796]: E1202 20:34:46.844609 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0773fae9-ae8b-4e87-bec4-a1d605ce9e99" containerName="ceilometer-central-agent" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.844617 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="0773fae9-ae8b-4e87-bec4-a1d605ce9e99" containerName="ceilometer-central-agent" Dec 02 20:34:46 crc kubenswrapper[4796]: E1202 20:34:46.844628 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0773fae9-ae8b-4e87-bec4-a1d605ce9e99" containerName="ceilometer-notification-agent" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.844634 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="0773fae9-ae8b-4e87-bec4-a1d605ce9e99" containerName="ceilometer-notification-agent" Dec 02 20:34:46 crc kubenswrapper[4796]: E1202 20:34:46.844642 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a17ed05-8780-4027-ab72-4008c36a4a47" containerName="watcher-kuttl-api-log" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.844649 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a17ed05-8780-4027-ab72-4008c36a4a47" containerName="watcher-kuttl-api-log" Dec 02 20:34:46 crc kubenswrapper[4796]: E1202 20:34:46.844668 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0773fae9-ae8b-4e87-bec4-a1d605ce9e99" containerName="sg-core" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.844675 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="0773fae9-ae8b-4e87-bec4-a1d605ce9e99" containerName="sg-core" Dec 02 20:34:46 crc kubenswrapper[4796]: E1202 20:34:46.844686 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca" containerName="watcher-applier" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.844694 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca" containerName="watcher-applier" Dec 02 20:34:46 crc kubenswrapper[4796]: E1202 20:34:46.844707 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0773fae9-ae8b-4e87-bec4-a1d605ce9e99" containerName="proxy-httpd" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.844715 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="0773fae9-ae8b-4e87-bec4-a1d605ce9e99" containerName="proxy-httpd" Dec 02 20:34:46 crc kubenswrapper[4796]: E1202 20:34:46.844728 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f135ea48-5363-4751-abd8-a1bf98a054f0" containerName="mariadb-account-delete" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.844736 4796 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f135ea48-5363-4751-abd8-a1bf98a054f0" containerName="mariadb-account-delete" Dec 02 20:34:46 crc kubenswrapper[4796]: E1202 20:34:46.844747 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a17ed05-8780-4027-ab72-4008c36a4a47" containerName="watcher-api" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.844754 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a17ed05-8780-4027-ab72-4008c36a4a47" containerName="watcher-api" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.844927 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="b564e2ad-1452-419e-9bba-1570f3341bdf" containerName="watcher-decision-engine" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.844940 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="0773fae9-ae8b-4e87-bec4-a1d605ce9e99" containerName="ceilometer-notification-agent" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.844953 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a17ed05-8780-4027-ab72-4008c36a4a47" containerName="watcher-kuttl-api-log" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.844963 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f135ea48-5363-4751-abd8-a1bf98a054f0" containerName="mariadb-account-delete" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.844971 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a17ed05-8780-4027-ab72-4008c36a4a47" containerName="watcher-api" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.844983 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="0773fae9-ae8b-4e87-bec4-a1d605ce9e99" containerName="proxy-httpd" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.844990 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca" containerName="watcher-applier" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.844998 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="0773fae9-ae8b-4e87-bec4-a1d605ce9e99" containerName="sg-core" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.845005 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="0773fae9-ae8b-4e87-bec4-a1d605ce9e99" containerName="ceilometer-central-agent" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.846583 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.852937 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.855331 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.855757 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.855996 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.886444 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b564e2ad-1452-419e-9bba-1570f3341bdf-config-data" (OuterVolumeSpecName: "config-data") pod "b564e2ad-1452-419e-9bba-1570f3341bdf" (UID: "b564e2ad-1452-419e-9bba-1570f3341bdf"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.891191 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b564e2ad-1452-419e-9bba-1570f3341bdf-logs\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.891222 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b564e2ad-1452-419e-9bba-1570f3341bdf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.891233 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b564e2ad-1452-419e-9bba-1570f3341bdf-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.891242 4796 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b564e2ad-1452-419e-9bba-1570f3341bdf-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.891272 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8j2hh\" (UniqueName: \"kubernetes.io/projected/b564e2ad-1452-419e-9bba-1570f3341bdf-kube-api-access-8j2hh\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.993729 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f4fa31b-730e-4c93-b54f-558da1518581-log-httpd\") pod \"ceilometer-0\" (UID: \"3f4fa31b-730e-4c93-b54f-558da1518581\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.994184 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f4fa31b-730e-4c93-b54f-558da1518581-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3f4fa31b-730e-4c93-b54f-558da1518581\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.994407 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f4fa31b-730e-4c93-b54f-558da1518581-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3f4fa31b-730e-4c93-b54f-558da1518581\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.994573 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f4fa31b-730e-4c93-b54f-558da1518581-config-data\") pod \"ceilometer-0\" (UID: \"3f4fa31b-730e-4c93-b54f-558da1518581\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.994748 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f4fa31b-730e-4c93-b54f-558da1518581-scripts\") pod \"ceilometer-0\" (UID: \"3f4fa31b-730e-4c93-b54f-558da1518581\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.994907 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49cq2\" (UniqueName: 
\"kubernetes.io/projected/3f4fa31b-730e-4c93-b54f-558da1518581-kube-api-access-49cq2\") pod \"ceilometer-0\" (UID: \"3f4fa31b-730e-4c93-b54f-558da1518581\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.995024 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f4fa31b-730e-4c93-b54f-558da1518581-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3f4fa31b-730e-4c93-b54f-558da1518581\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:46 crc kubenswrapper[4796]: I1202 20:34:46.995146 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f4fa31b-730e-4c93-b54f-558da1518581-run-httpd\") pod \"ceilometer-0\" (UID: \"3f4fa31b-730e-4c93-b54f-558da1518581\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:47 crc kubenswrapper[4796]: I1202 20:34:47.095986 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f4fa31b-730e-4c93-b54f-558da1518581-config-data\") pod \"ceilometer-0\" (UID: \"3f4fa31b-730e-4c93-b54f-558da1518581\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:47 crc kubenswrapper[4796]: I1202 20:34:47.096067 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f4fa31b-730e-4c93-b54f-558da1518581-scripts\") pod \"ceilometer-0\" (UID: \"3f4fa31b-730e-4c93-b54f-558da1518581\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:47 crc kubenswrapper[4796]: I1202 20:34:47.096129 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49cq2\" (UniqueName: \"kubernetes.io/projected/3f4fa31b-730e-4c93-b54f-558da1518581-kube-api-access-49cq2\") pod \"ceilometer-0\" (UID: \"3f4fa31b-730e-4c93-b54f-558da1518581\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:47 crc kubenswrapper[4796]: I1202 20:34:47.096168 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f4fa31b-730e-4c93-b54f-558da1518581-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3f4fa31b-730e-4c93-b54f-558da1518581\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:47 crc kubenswrapper[4796]: I1202 20:34:47.096212 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f4fa31b-730e-4c93-b54f-558da1518581-run-httpd\") pod \"ceilometer-0\" (UID: \"3f4fa31b-730e-4c93-b54f-558da1518581\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:47 crc kubenswrapper[4796]: I1202 20:34:47.096331 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f4fa31b-730e-4c93-b54f-558da1518581-log-httpd\") pod \"ceilometer-0\" (UID: \"3f4fa31b-730e-4c93-b54f-558da1518581\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:47 crc kubenswrapper[4796]: I1202 20:34:47.096363 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f4fa31b-730e-4c93-b54f-558da1518581-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3f4fa31b-730e-4c93-b54f-558da1518581\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:47 crc 
kubenswrapper[4796]: I1202 20:34:47.096440 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f4fa31b-730e-4c93-b54f-558da1518581-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3f4fa31b-730e-4c93-b54f-558da1518581\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:47 crc kubenswrapper[4796]: I1202 20:34:47.097116 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f4fa31b-730e-4c93-b54f-558da1518581-log-httpd\") pod \"ceilometer-0\" (UID: \"3f4fa31b-730e-4c93-b54f-558da1518581\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:47 crc kubenswrapper[4796]: I1202 20:34:47.097340 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f4fa31b-730e-4c93-b54f-558da1518581-run-httpd\") pod \"ceilometer-0\" (UID: \"3f4fa31b-730e-4c93-b54f-558da1518581\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:47 crc kubenswrapper[4796]: I1202 20:34:47.102138 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f4fa31b-730e-4c93-b54f-558da1518581-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3f4fa31b-730e-4c93-b54f-558da1518581\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:47 crc kubenswrapper[4796]: I1202 20:34:47.103851 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f4fa31b-730e-4c93-b54f-558da1518581-config-data\") pod \"ceilometer-0\" (UID: \"3f4fa31b-730e-4c93-b54f-558da1518581\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:47 crc kubenswrapper[4796]: I1202 20:34:47.104626 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f4fa31b-730e-4c93-b54f-558da1518581-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3f4fa31b-730e-4c93-b54f-558da1518581\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:47 crc kubenswrapper[4796]: I1202 20:34:47.106082 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f4fa31b-730e-4c93-b54f-558da1518581-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3f4fa31b-730e-4c93-b54f-558da1518581\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:47 crc kubenswrapper[4796]: I1202 20:34:47.108649 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f4fa31b-730e-4c93-b54f-558da1518581-scripts\") pod \"ceilometer-0\" (UID: \"3f4fa31b-730e-4c93-b54f-558da1518581\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:47 crc kubenswrapper[4796]: I1202 20:34:47.122701 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49cq2\" (UniqueName: \"kubernetes.io/projected/3f4fa31b-730e-4c93-b54f-558da1518581-kube-api-access-49cq2\") pod \"ceilometer-0\" (UID: \"3f4fa31b-730e-4c93-b54f-558da1518581\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:47 crc kubenswrapper[4796]: I1202 20:34:47.172211 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:47 crc kubenswrapper[4796]: I1202 20:34:47.279791 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0773fae9-ae8b-4e87-bec4-a1d605ce9e99" path="/var/lib/kubelet/pods/0773fae9-ae8b-4e87-bec4-a1d605ce9e99/volumes" Dec 02 20:34:47 crc kubenswrapper[4796]: I1202 20:34:47.280681 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca" path="/var/lib/kubelet/pods/1db5a8de-cc4c-47ad-b5e0-4f474ceed6ca/volumes" Dec 02 20:34:47 crc kubenswrapper[4796]: I1202 20:34:47.281186 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b15a422-e077-4fab-b811-7f1804f6df2b" path="/var/lib/kubelet/pods/6b15a422-e077-4fab-b811-7f1804f6df2b/volumes" Dec 02 20:34:47 crc kubenswrapper[4796]: I1202 20:34:47.282205 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f135ea48-5363-4751-abd8-a1bf98a054f0" path="/var/lib/kubelet/pods/f135ea48-5363-4751-abd8-a1bf98a054f0/volumes" Dec 02 20:34:47 crc kubenswrapper[4796]: I1202 20:34:47.504941 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"b564e2ad-1452-419e-9bba-1570f3341bdf","Type":"ContainerDied","Data":"6e600c21e19a6750d960b3bf2aae63b0568db19f169e910b3f734aa25c31b674"} Dec 02 20:34:47 crc kubenswrapper[4796]: I1202 20:34:47.506123 4796 scope.go:117] "RemoveContainer" containerID="ef8eb74dbde755664cbf83be7571933b2a8f505a3534ecc66002fb8ec311e261" Dec 02 20:34:47 crc kubenswrapper[4796]: I1202 20:34:47.504951 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:34:47 crc kubenswrapper[4796]: I1202 20:34:47.535186 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:34:47 crc kubenswrapper[4796]: I1202 20:34:47.541050 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:34:47 crc kubenswrapper[4796]: I1202 20:34:47.697527 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:34:48 crc kubenswrapper[4796]: I1202 20:34:48.539377 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3f4fa31b-730e-4c93-b54f-558da1518581","Type":"ContainerStarted","Data":"7bc9ea5125ed23bd927550c864cbf5ea377c59c9c4487beb10998ff809e678a3"} Dec 02 20:34:49 crc kubenswrapper[4796]: I1202 20:34:49.299209 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b564e2ad-1452-419e-9bba-1570f3341bdf" path="/var/lib/kubelet/pods/b564e2ad-1452-419e-9bba-1570f3341bdf/volumes" Dec 02 20:34:49 crc kubenswrapper[4796]: I1202 20:34:49.551145 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3f4fa31b-730e-4c93-b54f-558da1518581","Type":"ContainerStarted","Data":"0942b1d20d7999e7c0d9359ad174f9aae6a26be02288ee0661b7b9626c06eb9d"} Dec 02 20:34:49 crc kubenswrapper[4796]: I1202 20:34:49.551756 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3f4fa31b-730e-4c93-b54f-558da1518581","Type":"ContainerStarted","Data":"673d32932b6940ecd3881ed24e418b5938ec62153edc554d7fba020018af2618"} Dec 02 20:34:50 crc kubenswrapper[4796]: I1202 20:34:50.187972 4796 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t5s4k"] Dec 02 20:34:50 crc kubenswrapper[4796]: I1202 20:34:50.189792 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t5s4k" Dec 02 20:34:50 crc kubenswrapper[4796]: I1202 20:34:50.199409 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t5s4k"] Dec 02 20:34:50 crc kubenswrapper[4796]: I1202 20:34:50.358742 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/147cf58f-968f-47fd-9f5d-998c7c48b8f6-utilities\") pod \"redhat-operators-t5s4k\" (UID: \"147cf58f-968f-47fd-9f5d-998c7c48b8f6\") " pod="openshift-marketplace/redhat-operators-t5s4k" Dec 02 20:34:50 crc kubenswrapper[4796]: I1202 20:34:50.358809 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzlmq\" (UniqueName: \"kubernetes.io/projected/147cf58f-968f-47fd-9f5d-998c7c48b8f6-kube-api-access-rzlmq\") pod \"redhat-operators-t5s4k\" (UID: \"147cf58f-968f-47fd-9f5d-998c7c48b8f6\") " pod="openshift-marketplace/redhat-operators-t5s4k" Dec 02 20:34:50 crc kubenswrapper[4796]: I1202 20:34:50.358852 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/147cf58f-968f-47fd-9f5d-998c7c48b8f6-catalog-content\") pod \"redhat-operators-t5s4k\" (UID: \"147cf58f-968f-47fd-9f5d-998c7c48b8f6\") " pod="openshift-marketplace/redhat-operators-t5s4k" Dec 02 20:34:50 crc kubenswrapper[4796]: I1202 20:34:50.460714 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/147cf58f-968f-47fd-9f5d-998c7c48b8f6-utilities\") pod \"redhat-operators-t5s4k\" (UID: \"147cf58f-968f-47fd-9f5d-998c7c48b8f6\") " pod="openshift-marketplace/redhat-operators-t5s4k" Dec 02 20:34:50 crc kubenswrapper[4796]: I1202 20:34:50.460769 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzlmq\" (UniqueName: \"kubernetes.io/projected/147cf58f-968f-47fd-9f5d-998c7c48b8f6-kube-api-access-rzlmq\") pod \"redhat-operators-t5s4k\" (UID: \"147cf58f-968f-47fd-9f5d-998c7c48b8f6\") " pod="openshift-marketplace/redhat-operators-t5s4k" Dec 02 20:34:50 crc kubenswrapper[4796]: I1202 20:34:50.460800 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/147cf58f-968f-47fd-9f5d-998c7c48b8f6-catalog-content\") pod \"redhat-operators-t5s4k\" (UID: \"147cf58f-968f-47fd-9f5d-998c7c48b8f6\") " pod="openshift-marketplace/redhat-operators-t5s4k" Dec 02 20:34:50 crc kubenswrapper[4796]: I1202 20:34:50.461238 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/147cf58f-968f-47fd-9f5d-998c7c48b8f6-catalog-content\") pod \"redhat-operators-t5s4k\" (UID: \"147cf58f-968f-47fd-9f5d-998c7c48b8f6\") " pod="openshift-marketplace/redhat-operators-t5s4k" Dec 02 20:34:50 crc kubenswrapper[4796]: I1202 20:34:50.461345 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/147cf58f-968f-47fd-9f5d-998c7c48b8f6-utilities\") pod \"redhat-operators-t5s4k\" (UID: 
\"147cf58f-968f-47fd-9f5d-998c7c48b8f6\") " pod="openshift-marketplace/redhat-operators-t5s4k" Dec 02 20:34:50 crc kubenswrapper[4796]: I1202 20:34:50.502469 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzlmq\" (UniqueName: \"kubernetes.io/projected/147cf58f-968f-47fd-9f5d-998c7c48b8f6-kube-api-access-rzlmq\") pod \"redhat-operators-t5s4k\" (UID: \"147cf58f-968f-47fd-9f5d-998c7c48b8f6\") " pod="openshift-marketplace/redhat-operators-t5s4k" Dec 02 20:34:50 crc kubenswrapper[4796]: I1202 20:34:50.529721 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t5s4k" Dec 02 20:34:50 crc kubenswrapper[4796]: I1202 20:34:50.570788 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3f4fa31b-730e-4c93-b54f-558da1518581","Type":"ContainerStarted","Data":"38f1bd216e6c011205e74359631a3b6cdc6038ce0595aaac314a1918280d59cf"} Dec 02 20:34:51 crc kubenswrapper[4796]: I1202 20:34:51.011043 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t5s4k"] Dec 02 20:34:51 crc kubenswrapper[4796]: W1202 20:34:51.020024 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod147cf58f_968f_47fd_9f5d_998c7c48b8f6.slice/crio-23cee656a7dac0e2d76e79d2716227f316ff9b6c14fadcd83830571928400149 WatchSource:0}: Error finding container 23cee656a7dac0e2d76e79d2716227f316ff9b6c14fadcd83830571928400149: Status 404 returned error can't find the container with id 23cee656a7dac0e2d76e79d2716227f316ff9b6c14fadcd83830571928400149 Dec 02 20:34:51 crc kubenswrapper[4796]: I1202 20:34:51.595674 4796 generic.go:334] "Generic (PLEG): container finished" podID="147cf58f-968f-47fd-9f5d-998c7c48b8f6" containerID="102bc3af96b38b4e7c6a60957df092a1bcbfc02a047e8ec5f8ada4947572155a" exitCode=0 Dec 02 20:34:51 crc kubenswrapper[4796]: I1202 20:34:51.595789 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5s4k" event={"ID":"147cf58f-968f-47fd-9f5d-998c7c48b8f6","Type":"ContainerDied","Data":"102bc3af96b38b4e7c6a60957df092a1bcbfc02a047e8ec5f8ada4947572155a"} Dec 02 20:34:51 crc kubenswrapper[4796]: I1202 20:34:51.596025 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5s4k" event={"ID":"147cf58f-968f-47fd-9f5d-998c7c48b8f6","Type":"ContainerStarted","Data":"23cee656a7dac0e2d76e79d2716227f316ff9b6c14fadcd83830571928400149"} Dec 02 20:34:51 crc kubenswrapper[4796]: I1202 20:34:51.653018 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-fhjl7"] Dec 02 20:34:51 crc kubenswrapper[4796]: I1202 20:34:51.654332 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-fhjl7" Dec 02 20:34:51 crc kubenswrapper[4796]: I1202 20:34:51.665104 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-2f14-account-create-update-svtzs"] Dec 02 20:34:51 crc kubenswrapper[4796]: I1202 20:34:51.666310 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-2f14-account-create-update-svtzs" Dec 02 20:34:51 crc kubenswrapper[4796]: I1202 20:34:51.669667 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Dec 02 20:34:51 crc kubenswrapper[4796]: I1202 20:34:51.678724 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-fhjl7"] Dec 02 20:34:51 crc kubenswrapper[4796]: I1202 20:34:51.708101 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-2f14-account-create-update-svtzs"] Dec 02 20:34:51 crc kubenswrapper[4796]: I1202 20:34:51.798315 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b92hz\" (UniqueName: \"kubernetes.io/projected/a8b5866d-0d4b-4954-8ebb-3e1e339bf32d-kube-api-access-b92hz\") pod \"watcher-2f14-account-create-update-svtzs\" (UID: \"a8b5866d-0d4b-4954-8ebb-3e1e339bf32d\") " pod="watcher-kuttl-default/watcher-2f14-account-create-update-svtzs" Dec 02 20:34:51 crc kubenswrapper[4796]: I1202 20:34:51.798370 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8b5866d-0d4b-4954-8ebb-3e1e339bf32d-operator-scripts\") pod \"watcher-2f14-account-create-update-svtzs\" (UID: \"a8b5866d-0d4b-4954-8ebb-3e1e339bf32d\") " pod="watcher-kuttl-default/watcher-2f14-account-create-update-svtzs" Dec 02 20:34:51 crc kubenswrapper[4796]: I1202 20:34:51.798430 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5365ca9d-04f4-41cf-b834-8d1f753d1ab4-operator-scripts\") pod \"watcher-db-create-fhjl7\" (UID: \"5365ca9d-04f4-41cf-b834-8d1f753d1ab4\") " pod="watcher-kuttl-default/watcher-db-create-fhjl7" Dec 02 20:34:51 crc kubenswrapper[4796]: I1202 20:34:51.798467 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgtdb\" (UniqueName: \"kubernetes.io/projected/5365ca9d-04f4-41cf-b834-8d1f753d1ab4-kube-api-access-xgtdb\") pod \"watcher-db-create-fhjl7\" (UID: \"5365ca9d-04f4-41cf-b834-8d1f753d1ab4\") " pod="watcher-kuttl-default/watcher-db-create-fhjl7" Dec 02 20:34:51 crc kubenswrapper[4796]: I1202 20:34:51.900413 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5365ca9d-04f4-41cf-b834-8d1f753d1ab4-operator-scripts\") pod \"watcher-db-create-fhjl7\" (UID: \"5365ca9d-04f4-41cf-b834-8d1f753d1ab4\") " pod="watcher-kuttl-default/watcher-db-create-fhjl7" Dec 02 20:34:51 crc kubenswrapper[4796]: I1202 20:34:51.900493 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgtdb\" (UniqueName: \"kubernetes.io/projected/5365ca9d-04f4-41cf-b834-8d1f753d1ab4-kube-api-access-xgtdb\") pod \"watcher-db-create-fhjl7\" (UID: \"5365ca9d-04f4-41cf-b834-8d1f753d1ab4\") " pod="watcher-kuttl-default/watcher-db-create-fhjl7" Dec 02 20:34:51 crc kubenswrapper[4796]: I1202 20:34:51.900605 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b92hz\" (UniqueName: \"kubernetes.io/projected/a8b5866d-0d4b-4954-8ebb-3e1e339bf32d-kube-api-access-b92hz\") pod \"watcher-2f14-account-create-update-svtzs\" (UID: \"a8b5866d-0d4b-4954-8ebb-3e1e339bf32d\") " 
pod="watcher-kuttl-default/watcher-2f14-account-create-update-svtzs" Dec 02 20:34:51 crc kubenswrapper[4796]: I1202 20:34:51.900639 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8b5866d-0d4b-4954-8ebb-3e1e339bf32d-operator-scripts\") pod \"watcher-2f14-account-create-update-svtzs\" (UID: \"a8b5866d-0d4b-4954-8ebb-3e1e339bf32d\") " pod="watcher-kuttl-default/watcher-2f14-account-create-update-svtzs" Dec 02 20:34:51 crc kubenswrapper[4796]: I1202 20:34:51.901830 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5365ca9d-04f4-41cf-b834-8d1f753d1ab4-operator-scripts\") pod \"watcher-db-create-fhjl7\" (UID: \"5365ca9d-04f4-41cf-b834-8d1f753d1ab4\") " pod="watcher-kuttl-default/watcher-db-create-fhjl7" Dec 02 20:34:51 crc kubenswrapper[4796]: I1202 20:34:51.902108 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8b5866d-0d4b-4954-8ebb-3e1e339bf32d-operator-scripts\") pod \"watcher-2f14-account-create-update-svtzs\" (UID: \"a8b5866d-0d4b-4954-8ebb-3e1e339bf32d\") " pod="watcher-kuttl-default/watcher-2f14-account-create-update-svtzs" Dec 02 20:34:51 crc kubenswrapper[4796]: I1202 20:34:51.920122 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgtdb\" (UniqueName: \"kubernetes.io/projected/5365ca9d-04f4-41cf-b834-8d1f753d1ab4-kube-api-access-xgtdb\") pod \"watcher-db-create-fhjl7\" (UID: \"5365ca9d-04f4-41cf-b834-8d1f753d1ab4\") " pod="watcher-kuttl-default/watcher-db-create-fhjl7" Dec 02 20:34:51 crc kubenswrapper[4796]: I1202 20:34:51.942569 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b92hz\" (UniqueName: \"kubernetes.io/projected/a8b5866d-0d4b-4954-8ebb-3e1e339bf32d-kube-api-access-b92hz\") pod \"watcher-2f14-account-create-update-svtzs\" (UID: \"a8b5866d-0d4b-4954-8ebb-3e1e339bf32d\") " pod="watcher-kuttl-default/watcher-2f14-account-create-update-svtzs" Dec 02 20:34:51 crc kubenswrapper[4796]: I1202 20:34:51.970272 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-fhjl7" Dec 02 20:34:51 crc kubenswrapper[4796]: I1202 20:34:51.984707 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-2f14-account-create-update-svtzs" Dec 02 20:34:52 crc kubenswrapper[4796]: I1202 20:34:52.598859 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-2f14-account-create-update-svtzs"] Dec 02 20:34:52 crc kubenswrapper[4796]: I1202 20:34:52.606745 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5s4k" event={"ID":"147cf58f-968f-47fd-9f5d-998c7c48b8f6","Type":"ContainerStarted","Data":"069a5649a34836b5c5df5dfa4bcc01350f21c190191fcb443d8f3f349dd7f051"} Dec 02 20:34:52 crc kubenswrapper[4796]: W1202 20:34:52.612796 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8b5866d_0d4b_4954_8ebb_3e1e339bf32d.slice/crio-b477cd8882dca186e9f211abdb6ca20a72e49dea63240566fdaf0bdc8ddf5b77 WatchSource:0}: Error finding container b477cd8882dca186e9f211abdb6ca20a72e49dea63240566fdaf0bdc8ddf5b77: Status 404 returned error can't find the container with id b477cd8882dca186e9f211abdb6ca20a72e49dea63240566fdaf0bdc8ddf5b77 Dec 02 20:34:52 crc kubenswrapper[4796]: I1202 20:34:52.614851 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3f4fa31b-730e-4c93-b54f-558da1518581","Type":"ContainerStarted","Data":"a69ff185a3f62ee1c461bd16f59804a28090e1b92e2155d020eab543fb8b41be"} Dec 02 20:34:52 crc kubenswrapper[4796]: I1202 20:34:52.615361 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:34:52 crc kubenswrapper[4796]: I1202 20:34:52.653195 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-fhjl7"] Dec 02 20:34:52 crc kubenswrapper[4796]: I1202 20:34:52.673010 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.87683894 podStartE2EDuration="6.672985686s" podCreationTimestamp="2025-12-02 20:34:46 +0000 UTC" firstStartedPulling="2025-12-02 20:34:47.703275322 +0000 UTC m=+1370.706650866" lastFinishedPulling="2025-12-02 20:34:51.499422078 +0000 UTC m=+1374.502797612" observedRunningTime="2025-12-02 20:34:52.66907347 +0000 UTC m=+1375.672449004" watchObservedRunningTime="2025-12-02 20:34:52.672985686 +0000 UTC m=+1375.676361220" Dec 02 20:34:53 crc kubenswrapper[4796]: I1202 20:34:53.629461 4796 generic.go:334] "Generic (PLEG): container finished" podID="147cf58f-968f-47fd-9f5d-998c7c48b8f6" containerID="069a5649a34836b5c5df5dfa4bcc01350f21c190191fcb443d8f3f349dd7f051" exitCode=0 Dec 02 20:34:53 crc kubenswrapper[4796]: I1202 20:34:53.629653 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5s4k" event={"ID":"147cf58f-968f-47fd-9f5d-998c7c48b8f6","Type":"ContainerDied","Data":"069a5649a34836b5c5df5dfa4bcc01350f21c190191fcb443d8f3f349dd7f051"} Dec 02 20:34:53 crc kubenswrapper[4796]: I1202 20:34:53.641312 4796 generic.go:334] "Generic (PLEG): container finished" podID="5365ca9d-04f4-41cf-b834-8d1f753d1ab4" containerID="adf5155b5cc0b7f15b45bddc347d7e83448039f8bd75f3fb88fc89bdb7797520" exitCode=0 Dec 02 20:34:53 crc kubenswrapper[4796]: I1202 20:34:53.641548 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-fhjl7" 
event={"ID":"5365ca9d-04f4-41cf-b834-8d1f753d1ab4","Type":"ContainerDied","Data":"adf5155b5cc0b7f15b45bddc347d7e83448039f8bd75f3fb88fc89bdb7797520"} Dec 02 20:34:53 crc kubenswrapper[4796]: I1202 20:34:53.641715 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-fhjl7" event={"ID":"5365ca9d-04f4-41cf-b834-8d1f753d1ab4","Type":"ContainerStarted","Data":"b481f7d690b8b486a056ffd7d11139e4e83abf584761ec6b001d40e87797c377"} Dec 02 20:34:53 crc kubenswrapper[4796]: I1202 20:34:53.644746 4796 generic.go:334] "Generic (PLEG): container finished" podID="a8b5866d-0d4b-4954-8ebb-3e1e339bf32d" containerID="841b4e32921827f907e6e32b11aae05394731581e2474da032bd405d4587e919" exitCode=0 Dec 02 20:34:53 crc kubenswrapper[4796]: I1202 20:34:53.645874 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-2f14-account-create-update-svtzs" event={"ID":"a8b5866d-0d4b-4954-8ebb-3e1e339bf32d","Type":"ContainerDied","Data":"841b4e32921827f907e6e32b11aae05394731581e2474da032bd405d4587e919"} Dec 02 20:34:53 crc kubenswrapper[4796]: I1202 20:34:53.645954 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-2f14-account-create-update-svtzs" event={"ID":"a8b5866d-0d4b-4954-8ebb-3e1e339bf32d","Type":"ContainerStarted","Data":"b477cd8882dca186e9f211abdb6ca20a72e49dea63240566fdaf0bdc8ddf5b77"} Dec 02 20:34:54 crc kubenswrapper[4796]: I1202 20:34:54.660182 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5s4k" event={"ID":"147cf58f-968f-47fd-9f5d-998c7c48b8f6","Type":"ContainerStarted","Data":"6a6773d57495e676ece46b0bc5f532ff3d62c8b565937f0952a01a0e934fcdcf"} Dec 02 20:34:54 crc kubenswrapper[4796]: I1202 20:34:54.686494 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t5s4k" podStartSLOduration=1.941744492 podStartE2EDuration="4.686459224s" podCreationTimestamp="2025-12-02 20:34:50 +0000 UTC" firstStartedPulling="2025-12-02 20:34:51.598339338 +0000 UTC m=+1374.601714862" lastFinishedPulling="2025-12-02 20:34:54.34305405 +0000 UTC m=+1377.346429594" observedRunningTime="2025-12-02 20:34:54.68468413 +0000 UTC m=+1377.688059664" watchObservedRunningTime="2025-12-02 20:34:54.686459224 +0000 UTC m=+1377.689834788" Dec 02 20:34:55 crc kubenswrapper[4796]: I1202 20:34:55.171351 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-fhjl7" Dec 02 20:34:55 crc kubenswrapper[4796]: I1202 20:34:55.189240 4796 patch_prober.go:28] interesting pod/machine-config-daemon-wzhpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 20:34:55 crc kubenswrapper[4796]: I1202 20:34:55.189376 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 20:34:55 crc kubenswrapper[4796]: I1202 20:34:55.189454 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" Dec 02 20:34:55 crc kubenswrapper[4796]: I1202 20:34:55.189667 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-2f14-account-create-update-svtzs" Dec 02 20:34:55 crc kubenswrapper[4796]: I1202 20:34:55.190684 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"479b03b0d22f42a532c48eb369a41ad10eb068f11b5f4600bf6355106af1f04c"} pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 20:34:55 crc kubenswrapper[4796]: I1202 20:34:55.190797 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerName="machine-config-daemon" containerID="cri-o://479b03b0d22f42a532c48eb369a41ad10eb068f11b5f4600bf6355106af1f04c" gracePeriod=600 Dec 02 20:34:55 crc kubenswrapper[4796]: I1202 20:34:55.367028 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5365ca9d-04f4-41cf-b834-8d1f753d1ab4-operator-scripts\") pod \"5365ca9d-04f4-41cf-b834-8d1f753d1ab4\" (UID: \"5365ca9d-04f4-41cf-b834-8d1f753d1ab4\") " Dec 02 20:34:55 crc kubenswrapper[4796]: I1202 20:34:55.367553 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8b5866d-0d4b-4954-8ebb-3e1e339bf32d-operator-scripts\") pod \"a8b5866d-0d4b-4954-8ebb-3e1e339bf32d\" (UID: \"a8b5866d-0d4b-4954-8ebb-3e1e339bf32d\") " Dec 02 20:34:55 crc kubenswrapper[4796]: I1202 20:34:55.367666 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgtdb\" (UniqueName: \"kubernetes.io/projected/5365ca9d-04f4-41cf-b834-8d1f753d1ab4-kube-api-access-xgtdb\") pod \"5365ca9d-04f4-41cf-b834-8d1f753d1ab4\" (UID: \"5365ca9d-04f4-41cf-b834-8d1f753d1ab4\") " Dec 02 20:34:55 crc kubenswrapper[4796]: I1202 20:34:55.367785 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b92hz\" (UniqueName: \"kubernetes.io/projected/a8b5866d-0d4b-4954-8ebb-3e1e339bf32d-kube-api-access-b92hz\") pod \"a8b5866d-0d4b-4954-8ebb-3e1e339bf32d\" (UID: \"a8b5866d-0d4b-4954-8ebb-3e1e339bf32d\") " Dec 02 20:34:55 crc 
kubenswrapper[4796]: I1202 20:34:55.368162 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5365ca9d-04f4-41cf-b834-8d1f753d1ab4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5365ca9d-04f4-41cf-b834-8d1f753d1ab4" (UID: "5365ca9d-04f4-41cf-b834-8d1f753d1ab4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:34:55 crc kubenswrapper[4796]: I1202 20:34:55.368350 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5365ca9d-04f4-41cf-b834-8d1f753d1ab4-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:55 crc kubenswrapper[4796]: I1202 20:34:55.368448 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8b5866d-0d4b-4954-8ebb-3e1e339bf32d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a8b5866d-0d4b-4954-8ebb-3e1e339bf32d" (UID: "a8b5866d-0d4b-4954-8ebb-3e1e339bf32d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:34:55 crc kubenswrapper[4796]: I1202 20:34:55.374442 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5365ca9d-04f4-41cf-b834-8d1f753d1ab4-kube-api-access-xgtdb" (OuterVolumeSpecName: "kube-api-access-xgtdb") pod "5365ca9d-04f4-41cf-b834-8d1f753d1ab4" (UID: "5365ca9d-04f4-41cf-b834-8d1f753d1ab4"). InnerVolumeSpecName "kube-api-access-xgtdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:34:55 crc kubenswrapper[4796]: I1202 20:34:55.376465 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8b5866d-0d4b-4954-8ebb-3e1e339bf32d-kube-api-access-b92hz" (OuterVolumeSpecName: "kube-api-access-b92hz") pod "a8b5866d-0d4b-4954-8ebb-3e1e339bf32d" (UID: "a8b5866d-0d4b-4954-8ebb-3e1e339bf32d"). InnerVolumeSpecName "kube-api-access-b92hz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:34:55 crc kubenswrapper[4796]: I1202 20:34:55.470795 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b92hz\" (UniqueName: \"kubernetes.io/projected/a8b5866d-0d4b-4954-8ebb-3e1e339bf32d-kube-api-access-b92hz\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:55 crc kubenswrapper[4796]: I1202 20:34:55.471741 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8b5866d-0d4b-4954-8ebb-3e1e339bf32d-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:55 crc kubenswrapper[4796]: I1202 20:34:55.471784 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgtdb\" (UniqueName: \"kubernetes.io/projected/5365ca9d-04f4-41cf-b834-8d1f753d1ab4-kube-api-access-xgtdb\") on node \"crc\" DevicePath \"\"" Dec 02 20:34:55 crc kubenswrapper[4796]: I1202 20:34:55.695618 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-fhjl7" event={"ID":"5365ca9d-04f4-41cf-b834-8d1f753d1ab4","Type":"ContainerDied","Data":"b481f7d690b8b486a056ffd7d11139e4e83abf584761ec6b001d40e87797c377"} Dec 02 20:34:55 crc kubenswrapper[4796]: I1202 20:34:55.695694 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b481f7d690b8b486a056ffd7d11139e4e83abf584761ec6b001d40e87797c377" Dec 02 20:34:55 crc kubenswrapper[4796]: I1202 20:34:55.695644 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-fhjl7" Dec 02 20:34:55 crc kubenswrapper[4796]: I1202 20:34:55.697851 4796 generic.go:334] "Generic (PLEG): container finished" podID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerID="479b03b0d22f42a532c48eb369a41ad10eb068f11b5f4600bf6355106af1f04c" exitCode=0 Dec 02 20:34:55 crc kubenswrapper[4796]: I1202 20:34:55.697912 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" event={"ID":"5558dc7c-93f9-4212-bf22-fdec743e47ee","Type":"ContainerDied","Data":"479b03b0d22f42a532c48eb369a41ad10eb068f11b5f4600bf6355106af1f04c"} Dec 02 20:34:55 crc kubenswrapper[4796]: I1202 20:34:55.697950 4796 scope.go:117] "RemoveContainer" containerID="81e0968c57ec6d9b11845db69e201783fcaa5e46b23de5768e000d45505c4ab7" Dec 02 20:34:55 crc kubenswrapper[4796]: I1202 20:34:55.700740 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-2f14-account-create-update-svtzs" Dec 02 20:34:55 crc kubenswrapper[4796]: I1202 20:34:55.701350 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-2f14-account-create-update-svtzs" event={"ID":"a8b5866d-0d4b-4954-8ebb-3e1e339bf32d","Type":"ContainerDied","Data":"b477cd8882dca186e9f211abdb6ca20a72e49dea63240566fdaf0bdc8ddf5b77"} Dec 02 20:34:55 crc kubenswrapper[4796]: I1202 20:34:55.701450 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b477cd8882dca186e9f211abdb6ca20a72e49dea63240566fdaf0bdc8ddf5b77" Dec 02 20:34:56 crc kubenswrapper[4796]: I1202 20:34:56.711722 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" event={"ID":"5558dc7c-93f9-4212-bf22-fdec743e47ee","Type":"ContainerStarted","Data":"a3b1e4c1e71e5db50d52130283a4f488c4d2f8d9bfa463ec7357e0975e3daac1"} Dec 02 20:34:57 crc kubenswrapper[4796]: I1202 20:34:57.094197 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-7jml5"] Dec 02 20:34:57 crc kubenswrapper[4796]: E1202 20:34:57.094645 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5365ca9d-04f4-41cf-b834-8d1f753d1ab4" containerName="mariadb-database-create" Dec 02 20:34:57 crc kubenswrapper[4796]: I1202 20:34:57.094669 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="5365ca9d-04f4-41cf-b834-8d1f753d1ab4" containerName="mariadb-database-create" Dec 02 20:34:57 crc kubenswrapper[4796]: E1202 20:34:57.094695 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8b5866d-0d4b-4954-8ebb-3e1e339bf32d" containerName="mariadb-account-create-update" Dec 02 20:34:57 crc kubenswrapper[4796]: I1202 20:34:57.094703 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8b5866d-0d4b-4954-8ebb-3e1e339bf32d" containerName="mariadb-account-create-update" Dec 02 20:34:57 crc kubenswrapper[4796]: I1202 20:34:57.094944 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8b5866d-0d4b-4954-8ebb-3e1e339bf32d" containerName="mariadb-account-create-update" Dec 02 20:34:57 crc kubenswrapper[4796]: I1202 20:34:57.094968 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="5365ca9d-04f4-41cf-b834-8d1f753d1ab4" containerName="mariadb-database-create" Dec 02 20:34:57 crc kubenswrapper[4796]: I1202 20:34:57.095655 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-7jml5" Dec 02 20:34:57 crc kubenswrapper[4796]: I1202 20:34:57.100961 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-d5gx7" Dec 02 20:34:57 crc kubenswrapper[4796]: I1202 20:34:57.104773 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Dec 02 20:34:57 crc kubenswrapper[4796]: I1202 20:34:57.109861 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-7jml5"] Dec 02 20:34:57 crc kubenswrapper[4796]: I1202 20:34:57.210500 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/501c59b2-9c90-4fde-bc60-d15ab7b63cff-db-sync-config-data\") pod \"watcher-kuttl-db-sync-7jml5\" (UID: \"501c59b2-9c90-4fde-bc60-d15ab7b63cff\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7jml5" Dec 02 20:34:57 crc kubenswrapper[4796]: I1202 20:34:57.210643 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/501c59b2-9c90-4fde-bc60-d15ab7b63cff-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-7jml5\" (UID: \"501c59b2-9c90-4fde-bc60-d15ab7b63cff\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7jml5" Dec 02 20:34:57 crc kubenswrapper[4796]: I1202 20:34:57.210868 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lzcj\" (UniqueName: \"kubernetes.io/projected/501c59b2-9c90-4fde-bc60-d15ab7b63cff-kube-api-access-7lzcj\") pod \"watcher-kuttl-db-sync-7jml5\" (UID: \"501c59b2-9c90-4fde-bc60-d15ab7b63cff\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7jml5" Dec 02 20:34:57 crc kubenswrapper[4796]: I1202 20:34:57.211029 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/501c59b2-9c90-4fde-bc60-d15ab7b63cff-config-data\") pod \"watcher-kuttl-db-sync-7jml5\" (UID: \"501c59b2-9c90-4fde-bc60-d15ab7b63cff\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7jml5" Dec 02 20:34:57 crc kubenswrapper[4796]: I1202 20:34:57.312512 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lzcj\" (UniqueName: \"kubernetes.io/projected/501c59b2-9c90-4fde-bc60-d15ab7b63cff-kube-api-access-7lzcj\") pod \"watcher-kuttl-db-sync-7jml5\" (UID: \"501c59b2-9c90-4fde-bc60-d15ab7b63cff\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7jml5" Dec 02 20:34:57 crc kubenswrapper[4796]: I1202 20:34:57.312641 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/501c59b2-9c90-4fde-bc60-d15ab7b63cff-config-data\") pod \"watcher-kuttl-db-sync-7jml5\" (UID: \"501c59b2-9c90-4fde-bc60-d15ab7b63cff\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7jml5" Dec 02 20:34:57 crc kubenswrapper[4796]: I1202 20:34:57.312720 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/501c59b2-9c90-4fde-bc60-d15ab7b63cff-db-sync-config-data\") pod \"watcher-kuttl-db-sync-7jml5\" (UID: \"501c59b2-9c90-4fde-bc60-d15ab7b63cff\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7jml5" Dec 02 20:34:57 crc 
kubenswrapper[4796]: I1202 20:34:57.312764 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/501c59b2-9c90-4fde-bc60-d15ab7b63cff-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-7jml5\" (UID: \"501c59b2-9c90-4fde-bc60-d15ab7b63cff\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7jml5" Dec 02 20:34:57 crc kubenswrapper[4796]: I1202 20:34:57.316639 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Dec 02 20:34:57 crc kubenswrapper[4796]: I1202 20:34:57.332668 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/501c59b2-9c90-4fde-bc60-d15ab7b63cff-db-sync-config-data\") pod \"watcher-kuttl-db-sync-7jml5\" (UID: \"501c59b2-9c90-4fde-bc60-d15ab7b63cff\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7jml5" Dec 02 20:34:57 crc kubenswrapper[4796]: I1202 20:34:57.332810 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/501c59b2-9c90-4fde-bc60-d15ab7b63cff-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-7jml5\" (UID: \"501c59b2-9c90-4fde-bc60-d15ab7b63cff\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7jml5" Dec 02 20:34:57 crc kubenswrapper[4796]: I1202 20:34:57.342779 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/501c59b2-9c90-4fde-bc60-d15ab7b63cff-config-data\") pod \"watcher-kuttl-db-sync-7jml5\" (UID: \"501c59b2-9c90-4fde-bc60-d15ab7b63cff\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7jml5" Dec 02 20:34:57 crc kubenswrapper[4796]: I1202 20:34:57.355584 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lzcj\" (UniqueName: \"kubernetes.io/projected/501c59b2-9c90-4fde-bc60-d15ab7b63cff-kube-api-access-7lzcj\") pod \"watcher-kuttl-db-sync-7jml5\" (UID: \"501c59b2-9c90-4fde-bc60-d15ab7b63cff\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7jml5" Dec 02 20:34:57 crc kubenswrapper[4796]: I1202 20:34:57.426444 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-d5gx7" Dec 02 20:34:57 crc kubenswrapper[4796]: I1202 20:34:57.434849 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-7jml5" Dec 02 20:34:57 crc kubenswrapper[4796]: I1202 20:34:57.941048 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-7jml5"] Dec 02 20:34:58 crc kubenswrapper[4796]: I1202 20:34:58.729662 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-7jml5" event={"ID":"501c59b2-9c90-4fde-bc60-d15ab7b63cff","Type":"ContainerStarted","Data":"ca142f76575b505b3d674f607c31a1710c67f95ec4c133323ad2fe3c1a1f7b05"} Dec 02 20:34:58 crc kubenswrapper[4796]: I1202 20:34:58.730199 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-7jml5" event={"ID":"501c59b2-9c90-4fde-bc60-d15ab7b63cff","Type":"ContainerStarted","Data":"edd642e6ff5e8f27995a0e87e926041a7f2c4ab14861786373f352f97fe5432b"} Dec 02 20:34:58 crc kubenswrapper[4796]: I1202 20:34:58.795102 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-7jml5" podStartSLOduration=1.795062181 podStartE2EDuration="1.795062181s" podCreationTimestamp="2025-12-02 20:34:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:34:58.757586731 +0000 UTC m=+1381.760962265" watchObservedRunningTime="2025-12-02 20:34:58.795062181 +0000 UTC m=+1381.798437715" Dec 02 20:35:00 crc kubenswrapper[4796]: I1202 20:35:00.530225 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t5s4k" Dec 02 20:35:00 crc kubenswrapper[4796]: I1202 20:35:00.530578 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t5s4k" Dec 02 20:35:01 crc kubenswrapper[4796]: E1202 20:35:01.382639 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod501c59b2_9c90_4fde_bc60_d15ab7b63cff.slice/crio-conmon-ca142f76575b505b3d674f607c31a1710c67f95ec4c133323ad2fe3c1a1f7b05.scope\": RecentStats: unable to find data in memory cache]" Dec 02 20:35:01 crc kubenswrapper[4796]: I1202 20:35:01.615717 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t5s4k" podUID="147cf58f-968f-47fd-9f5d-998c7c48b8f6" containerName="registry-server" probeResult="failure" output=< Dec 02 20:35:01 crc kubenswrapper[4796]: timeout: failed to connect service ":50051" within 1s Dec 02 20:35:01 crc kubenswrapper[4796]: > Dec 02 20:35:01 crc kubenswrapper[4796]: I1202 20:35:01.756878 4796 generic.go:334] "Generic (PLEG): container finished" podID="501c59b2-9c90-4fde-bc60-d15ab7b63cff" containerID="ca142f76575b505b3d674f607c31a1710c67f95ec4c133323ad2fe3c1a1f7b05" exitCode=0 Dec 02 20:35:01 crc kubenswrapper[4796]: I1202 20:35:01.756935 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-7jml5" event={"ID":"501c59b2-9c90-4fde-bc60-d15ab7b63cff","Type":"ContainerDied","Data":"ca142f76575b505b3d674f607c31a1710c67f95ec4c133323ad2fe3c1a1f7b05"} Dec 02 20:35:03 crc kubenswrapper[4796]: I1202 20:35:03.286882 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-7jml5" Dec 02 20:35:03 crc kubenswrapper[4796]: I1202 20:35:03.424304 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/501c59b2-9c90-4fde-bc60-d15ab7b63cff-config-data\") pod \"501c59b2-9c90-4fde-bc60-d15ab7b63cff\" (UID: \"501c59b2-9c90-4fde-bc60-d15ab7b63cff\") " Dec 02 20:35:03 crc kubenswrapper[4796]: I1202 20:35:03.424390 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lzcj\" (UniqueName: \"kubernetes.io/projected/501c59b2-9c90-4fde-bc60-d15ab7b63cff-kube-api-access-7lzcj\") pod \"501c59b2-9c90-4fde-bc60-d15ab7b63cff\" (UID: \"501c59b2-9c90-4fde-bc60-d15ab7b63cff\") " Dec 02 20:35:03 crc kubenswrapper[4796]: I1202 20:35:03.424512 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/501c59b2-9c90-4fde-bc60-d15ab7b63cff-db-sync-config-data\") pod \"501c59b2-9c90-4fde-bc60-d15ab7b63cff\" (UID: \"501c59b2-9c90-4fde-bc60-d15ab7b63cff\") " Dec 02 20:35:03 crc kubenswrapper[4796]: I1202 20:35:03.424566 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/501c59b2-9c90-4fde-bc60-d15ab7b63cff-combined-ca-bundle\") pod \"501c59b2-9c90-4fde-bc60-d15ab7b63cff\" (UID: \"501c59b2-9c90-4fde-bc60-d15ab7b63cff\") " Dec 02 20:35:03 crc kubenswrapper[4796]: I1202 20:35:03.430908 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/501c59b2-9c90-4fde-bc60-d15ab7b63cff-kube-api-access-7lzcj" (OuterVolumeSpecName: "kube-api-access-7lzcj") pod "501c59b2-9c90-4fde-bc60-d15ab7b63cff" (UID: "501c59b2-9c90-4fde-bc60-d15ab7b63cff"). InnerVolumeSpecName "kube-api-access-7lzcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:35:03 crc kubenswrapper[4796]: I1202 20:35:03.432353 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/501c59b2-9c90-4fde-bc60-d15ab7b63cff-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "501c59b2-9c90-4fde-bc60-d15ab7b63cff" (UID: "501c59b2-9c90-4fde-bc60-d15ab7b63cff"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:35:03 crc kubenswrapper[4796]: I1202 20:35:03.451353 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/501c59b2-9c90-4fde-bc60-d15ab7b63cff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "501c59b2-9c90-4fde-bc60-d15ab7b63cff" (UID: "501c59b2-9c90-4fde-bc60-d15ab7b63cff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:35:03 crc kubenswrapper[4796]: I1202 20:35:03.478945 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/501c59b2-9c90-4fde-bc60-d15ab7b63cff-config-data" (OuterVolumeSpecName: "config-data") pod "501c59b2-9c90-4fde-bc60-d15ab7b63cff" (UID: "501c59b2-9c90-4fde-bc60-d15ab7b63cff"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:35:03 crc kubenswrapper[4796]: I1202 20:35:03.526911 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/501c59b2-9c90-4fde-bc60-d15ab7b63cff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:03 crc kubenswrapper[4796]: I1202 20:35:03.526963 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/501c59b2-9c90-4fde-bc60-d15ab7b63cff-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:03 crc kubenswrapper[4796]: I1202 20:35:03.526982 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lzcj\" (UniqueName: \"kubernetes.io/projected/501c59b2-9c90-4fde-bc60-d15ab7b63cff-kube-api-access-7lzcj\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:03 crc kubenswrapper[4796]: I1202 20:35:03.527004 4796 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/501c59b2-9c90-4fde-bc60-d15ab7b63cff-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:03 crc kubenswrapper[4796]: I1202 20:35:03.798951 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-7jml5" event={"ID":"501c59b2-9c90-4fde-bc60-d15ab7b63cff","Type":"ContainerDied","Data":"edd642e6ff5e8f27995a0e87e926041a7f2c4ab14861786373f352f97fe5432b"} Dec 02 20:35:03 crc kubenswrapper[4796]: I1202 20:35:03.799040 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edd642e6ff5e8f27995a0e87e926041a7f2c4ab14861786373f352f97fe5432b" Dec 02 20:35:03 crc kubenswrapper[4796]: I1202 20:35:03.799375 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-7jml5" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.079042 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:35:04 crc kubenswrapper[4796]: E1202 20:35:04.079878 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="501c59b2-9c90-4fde-bc60-d15ab7b63cff" containerName="watcher-kuttl-db-sync" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.079903 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="501c59b2-9c90-4fde-bc60-d15ab7b63cff" containerName="watcher-kuttl-db-sync" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.080121 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="501c59b2-9c90-4fde-bc60-d15ab7b63cff" containerName="watcher-kuttl-db-sync" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.081145 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.083013 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.089115 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-d5gx7" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.092359 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.131023 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.132631 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.134950 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.135123 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-internal-svc" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.135202 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-public-svc" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.141712 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.143281 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.145948 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.151972 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.224986 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.238147 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.238212 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx4sn\" (UniqueName: \"kubernetes.io/projected/2d5e8970-f21a-4be3-b1b9-41dacca9e94c-kube-api-access-kx4sn\") pod \"watcher-kuttl-api-0\" (UID: \"2d5e8970-f21a-4be3-b1b9-41dacca9e94c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.238244 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d5e8970-f21a-4be3-b1b9-41dacca9e94c-logs\") pod \"watcher-kuttl-api-0\" (UID: \"2d5e8970-f21a-4be3-b1b9-41dacca9e94c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.238298 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca397d3a-cbff-4faf-b80d-a6eab99fc47a-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"ca397d3a-cbff-4faf-b80d-a6eab99fc47a\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.238321 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.238354 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx2f7\" (UniqueName: \"kubernetes.io/projected/ca397d3a-cbff-4faf-b80d-a6eab99fc47a-kube-api-access-lx2f7\") pod \"watcher-kuttl-applier-0\" (UID: \"ca397d3a-cbff-4faf-b80d-a6eab99fc47a\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.238371 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d5e8970-f21a-4be3-b1b9-41dacca9e94c-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"2d5e8970-f21a-4be3-b1b9-41dacca9e94c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.238401 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca397d3a-cbff-4faf-b80d-a6eab99fc47a-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"ca397d3a-cbff-4faf-b80d-a6eab99fc47a\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.238421 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d5e8970-f21a-4be3-b1b9-41dacca9e94c-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"2d5e8970-f21a-4be3-b1b9-41dacca9e94c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.238446 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.238469 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsmd9\" (UniqueName: \"kubernetes.io/projected/1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80-kube-api-access-fsmd9\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.238527 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2d5e8970-f21a-4be3-b1b9-41dacca9e94c-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"2d5e8970-f21a-4be3-b1b9-41dacca9e94c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.238565 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.238627 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d5e8970-f21a-4be3-b1b9-41dacca9e94c-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"2d5e8970-f21a-4be3-b1b9-41dacca9e94c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.238652 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca397d3a-cbff-4faf-b80d-a6eab99fc47a-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"ca397d3a-cbff-4faf-b80d-a6eab99fc47a\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.238679 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d5e8970-f21a-4be3-b1b9-41dacca9e94c-public-tls-certs\") pod \"watcher-kuttl-api-0\" 
(UID: \"2d5e8970-f21a-4be3-b1b9-41dacca9e94c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.340831 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d5e8970-f21a-4be3-b1b9-41dacca9e94c-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"2d5e8970-f21a-4be3-b1b9-41dacca9e94c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.340898 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca397d3a-cbff-4faf-b80d-a6eab99fc47a-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"ca397d3a-cbff-4faf-b80d-a6eab99fc47a\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.340927 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d5e8970-f21a-4be3-b1b9-41dacca9e94c-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"2d5e8970-f21a-4be3-b1b9-41dacca9e94c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.340978 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.341010 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx4sn\" (UniqueName: \"kubernetes.io/projected/2d5e8970-f21a-4be3-b1b9-41dacca9e94c-kube-api-access-kx4sn\") pod \"watcher-kuttl-api-0\" (UID: \"2d5e8970-f21a-4be3-b1b9-41dacca9e94c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.341034 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d5e8970-f21a-4be3-b1b9-41dacca9e94c-logs\") pod \"watcher-kuttl-api-0\" (UID: \"2d5e8970-f21a-4be3-b1b9-41dacca9e94c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.341067 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca397d3a-cbff-4faf-b80d-a6eab99fc47a-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"ca397d3a-cbff-4faf-b80d-a6eab99fc47a\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.341089 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.341121 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2f7\" (UniqueName: \"kubernetes.io/projected/ca397d3a-cbff-4faf-b80d-a6eab99fc47a-kube-api-access-lx2f7\") pod \"watcher-kuttl-applier-0\" (UID: \"ca397d3a-cbff-4faf-b80d-a6eab99fc47a\") " 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.341142 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d5e8970-f21a-4be3-b1b9-41dacca9e94c-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"2d5e8970-f21a-4be3-b1b9-41dacca9e94c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.341174 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d5e8970-f21a-4be3-b1b9-41dacca9e94c-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"2d5e8970-f21a-4be3-b1b9-41dacca9e94c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.341197 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca397d3a-cbff-4faf-b80d-a6eab99fc47a-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"ca397d3a-cbff-4faf-b80d-a6eab99fc47a\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.341225 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.341249 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsmd9\" (UniqueName: \"kubernetes.io/projected/1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80-kube-api-access-fsmd9\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.341327 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2d5e8970-f21a-4be3-b1b9-41dacca9e94c-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"2d5e8970-f21a-4be3-b1b9-41dacca9e94c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.341360 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.342397 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.342689 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca397d3a-cbff-4faf-b80d-a6eab99fc47a-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"ca397d3a-cbff-4faf-b80d-a6eab99fc47a\") " 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.342939 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d5e8970-f21a-4be3-b1b9-41dacca9e94c-logs\") pod \"watcher-kuttl-api-0\" (UID: \"2d5e8970-f21a-4be3-b1b9-41dacca9e94c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.346274 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.346969 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d5e8970-f21a-4be3-b1b9-41dacca9e94c-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"2d5e8970-f21a-4be3-b1b9-41dacca9e94c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.347605 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d5e8970-f21a-4be3-b1b9-41dacca9e94c-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"2d5e8970-f21a-4be3-b1b9-41dacca9e94c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.349205 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.349410 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.350818 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca397d3a-cbff-4faf-b80d-a6eab99fc47a-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"ca397d3a-cbff-4faf-b80d-a6eab99fc47a\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.351124 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d5e8970-f21a-4be3-b1b9-41dacca9e94c-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"2d5e8970-f21a-4be3-b1b9-41dacca9e94c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.354879 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca397d3a-cbff-4faf-b80d-a6eab99fc47a-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"ca397d3a-cbff-4faf-b80d-a6eab99fc47a\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 
20:35:04.356419 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d5e8970-f21a-4be3-b1b9-41dacca9e94c-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"2d5e8970-f21a-4be3-b1b9-41dacca9e94c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.362627 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2d5e8970-f21a-4be3-b1b9-41dacca9e94c-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"2d5e8970-f21a-4be3-b1b9-41dacca9e94c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.373677 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsmd9\" (UniqueName: \"kubernetes.io/projected/1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80-kube-api-access-fsmd9\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.373731 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx2f7\" (UniqueName: \"kubernetes.io/projected/ca397d3a-cbff-4faf-b80d-a6eab99fc47a-kube-api-access-lx2f7\") pod \"watcher-kuttl-applier-0\" (UID: \"ca397d3a-cbff-4faf-b80d-a6eab99fc47a\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.374215 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx4sn\" (UniqueName: \"kubernetes.io/projected/2d5e8970-f21a-4be3-b1b9-41dacca9e94c-kube-api-access-kx4sn\") pod \"watcher-kuttl-api-0\" (UID: \"2d5e8970-f21a-4be3-b1b9-41dacca9e94c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.396051 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.496845 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.503714 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:04 crc kubenswrapper[4796]: I1202 20:35:04.896759 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:35:04 crc kubenswrapper[4796]: W1202 20:35:04.900220 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca397d3a_cbff_4faf_b80d_a6eab99fc47a.slice/crio-2a7dc99429d5e42c3dc1a76551401d85ad095a6baab6ea2f5f73e979442a3c84 WatchSource:0}: Error finding container 2a7dc99429d5e42c3dc1a76551401d85ad095a6baab6ea2f5f73e979442a3c84: Status 404 returned error can't find the container with id 2a7dc99429d5e42c3dc1a76551401d85ad095a6baab6ea2f5f73e979442a3c84 Dec 02 20:35:05 crc kubenswrapper[4796]: I1202 20:35:05.042488 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:35:05 crc kubenswrapper[4796]: W1202 20:35:05.047978 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d5e8970_f21a_4be3_b1b9_41dacca9e94c.slice/crio-0b27e6a8411a4f9f2acb36ec842f8c805ddfbb0c1e856f9d398f2019dd6eec07 WatchSource:0}: Error finding container 0b27e6a8411a4f9f2acb36ec842f8c805ddfbb0c1e856f9d398f2019dd6eec07: Status 404 returned error can't find the container with id 0b27e6a8411a4f9f2acb36ec842f8c805ddfbb0c1e856f9d398f2019dd6eec07 Dec 02 20:35:05 crc kubenswrapper[4796]: W1202 20:35:05.140450 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f0bd83e_fe69_4c53_8c68_dd61c3d8ea80.slice/crio-99f22147e861db4d05261d663cda26d417bde798684203584ed7362c4dd2c5ff WatchSource:0}: Error finding container 99f22147e861db4d05261d663cda26d417bde798684203584ed7362c4dd2c5ff: Status 404 returned error can't find the container with id 99f22147e861db4d05261d663cda26d417bde798684203584ed7362c4dd2c5ff Dec 02 20:35:05 crc kubenswrapper[4796]: I1202 20:35:05.142729 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:35:05 crc kubenswrapper[4796]: I1202 20:35:05.827150 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"2d5e8970-f21a-4be3-b1b9-41dacca9e94c","Type":"ContainerStarted","Data":"27ad1a6cb2947f3cdc29611a8f6617829a0fc34a21f882cf150da1355faaaa9e"} Dec 02 20:35:05 crc kubenswrapper[4796]: I1202 20:35:05.827573 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"2d5e8970-f21a-4be3-b1b9-41dacca9e94c","Type":"ContainerStarted","Data":"0bbd068a490a68171976b1f41d78510d0db13aeabb59064391e64d8b02f65f4f"} Dec 02 20:35:05 crc kubenswrapper[4796]: I1202 20:35:05.827595 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"2d5e8970-f21a-4be3-b1b9-41dacca9e94c","Type":"ContainerStarted","Data":"0b27e6a8411a4f9f2acb36ec842f8c805ddfbb0c1e856f9d398f2019dd6eec07"} Dec 02 20:35:05 crc kubenswrapper[4796]: I1202 20:35:05.830577 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80","Type":"ContainerStarted","Data":"db671afd412dea6ad0e3f2b7108707a4ec4fc39368de0d12bff4333a3a262472"} Dec 02 20:35:05 crc kubenswrapper[4796]: 
I1202 20:35:05.830626 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80","Type":"ContainerStarted","Data":"99f22147e861db4d05261d663cda26d417bde798684203584ed7362c4dd2c5ff"} Dec 02 20:35:05 crc kubenswrapper[4796]: I1202 20:35:05.833289 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"ca397d3a-cbff-4faf-b80d-a6eab99fc47a","Type":"ContainerStarted","Data":"f88eeb51e39bfb91504e41120735486f46a481df42fe31b6d03aab7911b57e38"} Dec 02 20:35:05 crc kubenswrapper[4796]: I1202 20:35:05.833333 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"ca397d3a-cbff-4faf-b80d-a6eab99fc47a","Type":"ContainerStarted","Data":"2a7dc99429d5e42c3dc1a76551401d85ad095a6baab6ea2f5f73e979442a3c84"} Dec 02 20:35:05 crc kubenswrapper[4796]: I1202 20:35:05.852304 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=1.852278216 podStartE2EDuration="1.852278216s" podCreationTimestamp="2025-12-02 20:35:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:35:05.851138729 +0000 UTC m=+1388.854514283" watchObservedRunningTime="2025-12-02 20:35:05.852278216 +0000 UTC m=+1388.855653750" Dec 02 20:35:05 crc kubenswrapper[4796]: I1202 20:35:05.907029 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=1.906992394 podStartE2EDuration="1.906992394s" podCreationTimestamp="2025-12-02 20:35:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:35:05.877856248 +0000 UTC m=+1388.881231782" watchObservedRunningTime="2025-12-02 20:35:05.906992394 +0000 UTC m=+1388.910367968" Dec 02 20:35:05 crc kubenswrapper[4796]: I1202 20:35:05.914705 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=1.914686841 podStartE2EDuration="1.914686841s" podCreationTimestamp="2025-12-02 20:35:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:35:05.904397202 +0000 UTC m=+1388.907772756" watchObservedRunningTime="2025-12-02 20:35:05.914686841 +0000 UTC m=+1388.918062375" Dec 02 20:35:06 crc kubenswrapper[4796]: I1202 20:35:06.841837 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:09 crc kubenswrapper[4796]: I1202 20:35:09.201076 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:09 crc kubenswrapper[4796]: I1202 20:35:09.396665 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:35:09 crc kubenswrapper[4796]: I1202 20:35:09.496968 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:10 crc kubenswrapper[4796]: I1202 20:35:10.587144 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-t5s4k" Dec 02 20:35:10 crc kubenswrapper[4796]: I1202 20:35:10.655108 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t5s4k" Dec 02 20:35:14 crc kubenswrapper[4796]: I1202 20:35:14.397079 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:35:14 crc kubenswrapper[4796]: I1202 20:35:14.446203 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:35:14 crc kubenswrapper[4796]: I1202 20:35:14.497667 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:14 crc kubenswrapper[4796]: I1202 20:35:14.503894 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:14 crc kubenswrapper[4796]: I1202 20:35:14.511595 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:14 crc kubenswrapper[4796]: I1202 20:35:14.541126 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:14 crc kubenswrapper[4796]: I1202 20:35:14.920465 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:14 crc kubenswrapper[4796]: I1202 20:35:14.935993 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:14 crc kubenswrapper[4796]: I1202 20:35:14.954899 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:35:14 crc kubenswrapper[4796]: I1202 20:35:14.983337 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:14 crc kubenswrapper[4796]: I1202 20:35:14.991090 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t5s4k"] Dec 02 20:35:14 crc kubenswrapper[4796]: I1202 20:35:14.991652 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t5s4k" podUID="147cf58f-968f-47fd-9f5d-998c7c48b8f6" containerName="registry-server" containerID="cri-o://6a6773d57495e676ece46b0bc5f532ff3d62c8b565937f0952a01a0e934fcdcf" gracePeriod=2 Dec 02 20:35:15 crc kubenswrapper[4796]: I1202 20:35:15.463640 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t5s4k" Dec 02 20:35:15 crc kubenswrapper[4796]: I1202 20:35:15.560398 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/147cf58f-968f-47fd-9f5d-998c7c48b8f6-utilities\") pod \"147cf58f-968f-47fd-9f5d-998c7c48b8f6\" (UID: \"147cf58f-968f-47fd-9f5d-998c7c48b8f6\") " Dec 02 20:35:15 crc kubenswrapper[4796]: I1202 20:35:15.560882 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/147cf58f-968f-47fd-9f5d-998c7c48b8f6-catalog-content\") pod \"147cf58f-968f-47fd-9f5d-998c7c48b8f6\" (UID: \"147cf58f-968f-47fd-9f5d-998c7c48b8f6\") " Dec 02 20:35:15 crc kubenswrapper[4796]: I1202 20:35:15.560969 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzlmq\" (UniqueName: \"kubernetes.io/projected/147cf58f-968f-47fd-9f5d-998c7c48b8f6-kube-api-access-rzlmq\") pod \"147cf58f-968f-47fd-9f5d-998c7c48b8f6\" (UID: \"147cf58f-968f-47fd-9f5d-998c7c48b8f6\") " Dec 02 20:35:15 crc kubenswrapper[4796]: I1202 20:35:15.562430 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/147cf58f-968f-47fd-9f5d-998c7c48b8f6-utilities" (OuterVolumeSpecName: "utilities") pod "147cf58f-968f-47fd-9f5d-998c7c48b8f6" (UID: "147cf58f-968f-47fd-9f5d-998c7c48b8f6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:35:15 crc kubenswrapper[4796]: I1202 20:35:15.567397 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/147cf58f-968f-47fd-9f5d-998c7c48b8f6-kube-api-access-rzlmq" (OuterVolumeSpecName: "kube-api-access-rzlmq") pod "147cf58f-968f-47fd-9f5d-998c7c48b8f6" (UID: "147cf58f-968f-47fd-9f5d-998c7c48b8f6"). InnerVolumeSpecName "kube-api-access-rzlmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:35:15 crc kubenswrapper[4796]: I1202 20:35:15.660705 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/147cf58f-968f-47fd-9f5d-998c7c48b8f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "147cf58f-968f-47fd-9f5d-998c7c48b8f6" (UID: "147cf58f-968f-47fd-9f5d-998c7c48b8f6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:35:15 crc kubenswrapper[4796]: I1202 20:35:15.663936 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/147cf58f-968f-47fd-9f5d-998c7c48b8f6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:15 crc kubenswrapper[4796]: I1202 20:35:15.663995 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzlmq\" (UniqueName: \"kubernetes.io/projected/147cf58f-968f-47fd-9f5d-998c7c48b8f6-kube-api-access-rzlmq\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:15 crc kubenswrapper[4796]: I1202 20:35:15.664020 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/147cf58f-968f-47fd-9f5d-998c7c48b8f6-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:15 crc kubenswrapper[4796]: I1202 20:35:15.930285 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t5s4k" Dec 02 20:35:15 crc kubenswrapper[4796]: I1202 20:35:15.930297 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5s4k" event={"ID":"147cf58f-968f-47fd-9f5d-998c7c48b8f6","Type":"ContainerDied","Data":"6a6773d57495e676ece46b0bc5f532ff3d62c8b565937f0952a01a0e934fcdcf"} Dec 02 20:35:15 crc kubenswrapper[4796]: I1202 20:35:15.930437 4796 scope.go:117] "RemoveContainer" containerID="6a6773d57495e676ece46b0bc5f532ff3d62c8b565937f0952a01a0e934fcdcf" Dec 02 20:35:15 crc kubenswrapper[4796]: I1202 20:35:15.931317 4796 generic.go:334] "Generic (PLEG): container finished" podID="147cf58f-968f-47fd-9f5d-998c7c48b8f6" containerID="6a6773d57495e676ece46b0bc5f532ff3d62c8b565937f0952a01a0e934fcdcf" exitCode=0 Dec 02 20:35:15 crc kubenswrapper[4796]: I1202 20:35:15.931453 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5s4k" event={"ID":"147cf58f-968f-47fd-9f5d-998c7c48b8f6","Type":"ContainerDied","Data":"23cee656a7dac0e2d76e79d2716227f316ff9b6c14fadcd83830571928400149"} Dec 02 20:35:15 crc kubenswrapper[4796]: I1202 20:35:15.953465 4796 scope.go:117] "RemoveContainer" containerID="069a5649a34836b5c5df5dfa4bcc01350f21c190191fcb443d8f3f349dd7f051" Dec 02 20:35:15 crc kubenswrapper[4796]: I1202 20:35:15.973319 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t5s4k"] Dec 02 20:35:15 crc kubenswrapper[4796]: I1202 20:35:15.980302 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t5s4k"] Dec 02 20:35:15 crc kubenswrapper[4796]: I1202 20:35:15.992758 4796 scope.go:117] "RemoveContainer" containerID="102bc3af96b38b4e7c6a60957df092a1bcbfc02a047e8ec5f8ada4947572155a" Dec 02 20:35:16 crc kubenswrapper[4796]: I1202 20:35:16.030971 4796 scope.go:117] "RemoveContainer" containerID="6a6773d57495e676ece46b0bc5f532ff3d62c8b565937f0952a01a0e934fcdcf" Dec 02 20:35:16 crc kubenswrapper[4796]: E1202 20:35:16.031838 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a6773d57495e676ece46b0bc5f532ff3d62c8b565937f0952a01a0e934fcdcf\": container with ID starting with 6a6773d57495e676ece46b0bc5f532ff3d62c8b565937f0952a01a0e934fcdcf not found: ID does not exist" containerID="6a6773d57495e676ece46b0bc5f532ff3d62c8b565937f0952a01a0e934fcdcf" Dec 02 20:35:16 crc kubenswrapper[4796]: I1202 20:35:16.031889 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a6773d57495e676ece46b0bc5f532ff3d62c8b565937f0952a01a0e934fcdcf"} err="failed to get container status \"6a6773d57495e676ece46b0bc5f532ff3d62c8b565937f0952a01a0e934fcdcf\": rpc error: code = NotFound desc = could not find container \"6a6773d57495e676ece46b0bc5f532ff3d62c8b565937f0952a01a0e934fcdcf\": container with ID starting with 6a6773d57495e676ece46b0bc5f532ff3d62c8b565937f0952a01a0e934fcdcf not found: ID does not exist" Dec 02 20:35:16 crc kubenswrapper[4796]: I1202 20:35:16.031923 4796 scope.go:117] "RemoveContainer" containerID="069a5649a34836b5c5df5dfa4bcc01350f21c190191fcb443d8f3f349dd7f051" Dec 02 20:35:16 crc kubenswrapper[4796]: E1202 20:35:16.032365 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"069a5649a34836b5c5df5dfa4bcc01350f21c190191fcb443d8f3f349dd7f051\": container with ID starting with 
069a5649a34836b5c5df5dfa4bcc01350f21c190191fcb443d8f3f349dd7f051 not found: ID does not exist" containerID="069a5649a34836b5c5df5dfa4bcc01350f21c190191fcb443d8f3f349dd7f051" Dec 02 20:35:16 crc kubenswrapper[4796]: I1202 20:35:16.032424 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"069a5649a34836b5c5df5dfa4bcc01350f21c190191fcb443d8f3f349dd7f051"} err="failed to get container status \"069a5649a34836b5c5df5dfa4bcc01350f21c190191fcb443d8f3f349dd7f051\": rpc error: code = NotFound desc = could not find container \"069a5649a34836b5c5df5dfa4bcc01350f21c190191fcb443d8f3f349dd7f051\": container with ID starting with 069a5649a34836b5c5df5dfa4bcc01350f21c190191fcb443d8f3f349dd7f051 not found: ID does not exist" Dec 02 20:35:16 crc kubenswrapper[4796]: I1202 20:35:16.032460 4796 scope.go:117] "RemoveContainer" containerID="102bc3af96b38b4e7c6a60957df092a1bcbfc02a047e8ec5f8ada4947572155a" Dec 02 20:35:16 crc kubenswrapper[4796]: E1202 20:35:16.032800 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"102bc3af96b38b4e7c6a60957df092a1bcbfc02a047e8ec5f8ada4947572155a\": container with ID starting with 102bc3af96b38b4e7c6a60957df092a1bcbfc02a047e8ec5f8ada4947572155a not found: ID does not exist" containerID="102bc3af96b38b4e7c6a60957df092a1bcbfc02a047e8ec5f8ada4947572155a" Dec 02 20:35:16 crc kubenswrapper[4796]: I1202 20:35:16.032819 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"102bc3af96b38b4e7c6a60957df092a1bcbfc02a047e8ec5f8ada4947572155a"} err="failed to get container status \"102bc3af96b38b4e7c6a60957df092a1bcbfc02a047e8ec5f8ada4947572155a\": rpc error: code = NotFound desc = could not find container \"102bc3af96b38b4e7c6a60957df092a1bcbfc02a047e8ec5f8ada4947572155a\": container with ID starting with 102bc3af96b38b4e7c6a60957df092a1bcbfc02a047e8ec5f8ada4947572155a not found: ID does not exist" Dec 02 20:35:17 crc kubenswrapper[4796]: I1202 20:35:17.187452 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:17 crc kubenswrapper[4796]: I1202 20:35:17.286915 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="147cf58f-968f-47fd-9f5d-998c7c48b8f6" path="/var/lib/kubelet/pods/147cf58f-968f-47fd-9f5d-998c7c48b8f6/volumes" Dec 02 20:35:17 crc kubenswrapper[4796]: I1202 20:35:17.488185 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:35:17 crc kubenswrapper[4796]: I1202 20:35:17.683038 4796 scope.go:117] "RemoveContainer" containerID="5021094da3f08a8f13e263e3c6bd92501c79b859c5434ca22038eeb2fd948e55" Dec 02 20:35:17 crc kubenswrapper[4796]: I1202 20:35:17.717536 4796 scope.go:117] "RemoveContainer" containerID="ed7ceec4dc6ad9fa5b7cc84e6de794343b7d522dc3537de8792b5475f016ffd5" Dec 02 20:35:17 crc kubenswrapper[4796]: I1202 20:35:17.955712 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="3f4fa31b-730e-4c93-b54f-558da1518581" containerName="sg-core" containerID="cri-o://38f1bd216e6c011205e74359631a3b6cdc6038ce0595aaac314a1918280d59cf" gracePeriod=30 Dec 02 20:35:17 crc kubenswrapper[4796]: I1202 20:35:17.955783 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="3f4fa31b-730e-4c93-b54f-558da1518581" 
containerName="ceilometer-notification-agent" containerID="cri-o://0942b1d20d7999e7c0d9359ad174f9aae6a26be02288ee0661b7b9626c06eb9d" gracePeriod=30 Dec 02 20:35:17 crc kubenswrapper[4796]: I1202 20:35:17.955815 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="3f4fa31b-730e-4c93-b54f-558da1518581" containerName="proxy-httpd" containerID="cri-o://a69ff185a3f62ee1c461bd16f59804a28090e1b92e2155d020eab543fb8b41be" gracePeriod=30 Dec 02 20:35:17 crc kubenswrapper[4796]: I1202 20:35:17.955938 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="3f4fa31b-730e-4c93-b54f-558da1518581" containerName="ceilometer-central-agent" containerID="cri-o://673d32932b6940ecd3881ed24e418b5938ec62153edc554d7fba020018af2618" gracePeriod=30 Dec 02 20:35:18 crc kubenswrapper[4796]: I1202 20:35:18.762201 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:35:18 crc kubenswrapper[4796]: I1202 20:35:18.766033 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="2d5e8970-f21a-4be3-b1b9-41dacca9e94c" containerName="watcher-kuttl-api-log" containerID="cri-o://0bbd068a490a68171976b1f41d78510d0db13aeabb59064391e64d8b02f65f4f" gracePeriod=30 Dec 02 20:35:18 crc kubenswrapper[4796]: I1202 20:35:18.766057 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="2d5e8970-f21a-4be3-b1b9-41dacca9e94c" containerName="watcher-api" containerID="cri-o://27ad1a6cb2947f3cdc29611a8f6617829a0fc34a21f882cf150da1355faaaa9e" gracePeriod=30 Dec 02 20:35:18 crc kubenswrapper[4796]: I1202 20:35:18.965860 4796 generic.go:334] "Generic (PLEG): container finished" podID="3f4fa31b-730e-4c93-b54f-558da1518581" containerID="a69ff185a3f62ee1c461bd16f59804a28090e1b92e2155d020eab543fb8b41be" exitCode=0 Dec 02 20:35:18 crc kubenswrapper[4796]: I1202 20:35:18.965904 4796 generic.go:334] "Generic (PLEG): container finished" podID="3f4fa31b-730e-4c93-b54f-558da1518581" containerID="38f1bd216e6c011205e74359631a3b6cdc6038ce0595aaac314a1918280d59cf" exitCode=2 Dec 02 20:35:18 crc kubenswrapper[4796]: I1202 20:35:18.965913 4796 generic.go:334] "Generic (PLEG): container finished" podID="3f4fa31b-730e-4c93-b54f-558da1518581" containerID="673d32932b6940ecd3881ed24e418b5938ec62153edc554d7fba020018af2618" exitCode=0 Dec 02 20:35:18 crc kubenswrapper[4796]: I1202 20:35:18.965962 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3f4fa31b-730e-4c93-b54f-558da1518581","Type":"ContainerDied","Data":"a69ff185a3f62ee1c461bd16f59804a28090e1b92e2155d020eab543fb8b41be"} Dec 02 20:35:18 crc kubenswrapper[4796]: I1202 20:35:18.966040 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3f4fa31b-730e-4c93-b54f-558da1518581","Type":"ContainerDied","Data":"38f1bd216e6c011205e74359631a3b6cdc6038ce0595aaac314a1918280d59cf"} Dec 02 20:35:18 crc kubenswrapper[4796]: I1202 20:35:18.966054 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3f4fa31b-730e-4c93-b54f-558da1518581","Type":"ContainerDied","Data":"673d32932b6940ecd3881ed24e418b5938ec62153edc554d7fba020018af2618"} Dec 02 20:35:18 crc kubenswrapper[4796]: I1202 20:35:18.967789 4796 generic.go:334] 
"Generic (PLEG): container finished" podID="2d5e8970-f21a-4be3-b1b9-41dacca9e94c" containerID="0bbd068a490a68171976b1f41d78510d0db13aeabb59064391e64d8b02f65f4f" exitCode=143 Dec 02 20:35:18 crc kubenswrapper[4796]: I1202 20:35:18.967823 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"2d5e8970-f21a-4be3-b1b9-41dacca9e94c","Type":"ContainerDied","Data":"0bbd068a490a68171976b1f41d78510d0db13aeabb59064391e64d8b02f65f4f"} Dec 02 20:35:19 crc kubenswrapper[4796]: I1202 20:35:19.693151 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:19 crc kubenswrapper[4796]: I1202 20:35:19.742426 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d5e8970-f21a-4be3-b1b9-41dacca9e94c-combined-ca-bundle\") pod \"2d5e8970-f21a-4be3-b1b9-41dacca9e94c\" (UID: \"2d5e8970-f21a-4be3-b1b9-41dacca9e94c\") " Dec 02 20:35:19 crc kubenswrapper[4796]: I1202 20:35:19.742522 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d5e8970-f21a-4be3-b1b9-41dacca9e94c-config-data\") pod \"2d5e8970-f21a-4be3-b1b9-41dacca9e94c\" (UID: \"2d5e8970-f21a-4be3-b1b9-41dacca9e94c\") " Dec 02 20:35:19 crc kubenswrapper[4796]: I1202 20:35:19.742577 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d5e8970-f21a-4be3-b1b9-41dacca9e94c-public-tls-certs\") pod \"2d5e8970-f21a-4be3-b1b9-41dacca9e94c\" (UID: \"2d5e8970-f21a-4be3-b1b9-41dacca9e94c\") " Dec 02 20:35:19 crc kubenswrapper[4796]: I1202 20:35:19.742601 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2d5e8970-f21a-4be3-b1b9-41dacca9e94c-custom-prometheus-ca\") pod \"2d5e8970-f21a-4be3-b1b9-41dacca9e94c\" (UID: \"2d5e8970-f21a-4be3-b1b9-41dacca9e94c\") " Dec 02 20:35:19 crc kubenswrapper[4796]: I1202 20:35:19.742679 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d5e8970-f21a-4be3-b1b9-41dacca9e94c-logs\") pod \"2d5e8970-f21a-4be3-b1b9-41dacca9e94c\" (UID: \"2d5e8970-f21a-4be3-b1b9-41dacca9e94c\") " Dec 02 20:35:19 crc kubenswrapper[4796]: I1202 20:35:19.742817 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d5e8970-f21a-4be3-b1b9-41dacca9e94c-internal-tls-certs\") pod \"2d5e8970-f21a-4be3-b1b9-41dacca9e94c\" (UID: \"2d5e8970-f21a-4be3-b1b9-41dacca9e94c\") " Dec 02 20:35:19 crc kubenswrapper[4796]: I1202 20:35:19.742958 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kx4sn\" (UniqueName: \"kubernetes.io/projected/2d5e8970-f21a-4be3-b1b9-41dacca9e94c-kube-api-access-kx4sn\") pod \"2d5e8970-f21a-4be3-b1b9-41dacca9e94c\" (UID: \"2d5e8970-f21a-4be3-b1b9-41dacca9e94c\") " Dec 02 20:35:19 crc kubenswrapper[4796]: I1202 20:35:19.744894 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d5e8970-f21a-4be3-b1b9-41dacca9e94c-logs" (OuterVolumeSpecName: "logs") pod "2d5e8970-f21a-4be3-b1b9-41dacca9e94c" (UID: "2d5e8970-f21a-4be3-b1b9-41dacca9e94c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:35:19 crc kubenswrapper[4796]: I1202 20:35:19.775193 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d5e8970-f21a-4be3-b1b9-41dacca9e94c-kube-api-access-kx4sn" (OuterVolumeSpecName: "kube-api-access-kx4sn") pod "2d5e8970-f21a-4be3-b1b9-41dacca9e94c" (UID: "2d5e8970-f21a-4be3-b1b9-41dacca9e94c"). InnerVolumeSpecName "kube-api-access-kx4sn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:35:19 crc kubenswrapper[4796]: I1202 20:35:19.789647 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d5e8970-f21a-4be3-b1b9-41dacca9e94c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d5e8970-f21a-4be3-b1b9-41dacca9e94c" (UID: "2d5e8970-f21a-4be3-b1b9-41dacca9e94c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:35:19 crc kubenswrapper[4796]: I1202 20:35:19.792713 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d5e8970-f21a-4be3-b1b9-41dacca9e94c-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "2d5e8970-f21a-4be3-b1b9-41dacca9e94c" (UID: "2d5e8970-f21a-4be3-b1b9-41dacca9e94c"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:35:19 crc kubenswrapper[4796]: I1202 20:35:19.816414 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d5e8970-f21a-4be3-b1b9-41dacca9e94c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2d5e8970-f21a-4be3-b1b9-41dacca9e94c" (UID: "2d5e8970-f21a-4be3-b1b9-41dacca9e94c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:35:19 crc kubenswrapper[4796]: I1202 20:35:19.833630 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d5e8970-f21a-4be3-b1b9-41dacca9e94c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2d5e8970-f21a-4be3-b1b9-41dacca9e94c" (UID: "2d5e8970-f21a-4be3-b1b9-41dacca9e94c"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:35:19 crc kubenswrapper[4796]: I1202 20:35:19.845112 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kx4sn\" (UniqueName: \"kubernetes.io/projected/2d5e8970-f21a-4be3-b1b9-41dacca9e94c-kube-api-access-kx4sn\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:19 crc kubenswrapper[4796]: I1202 20:35:19.845146 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d5e8970-f21a-4be3-b1b9-41dacca9e94c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:19 crc kubenswrapper[4796]: I1202 20:35:19.845156 4796 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d5e8970-f21a-4be3-b1b9-41dacca9e94c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:19 crc kubenswrapper[4796]: I1202 20:35:19.845165 4796 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2d5e8970-f21a-4be3-b1b9-41dacca9e94c-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:19 crc kubenswrapper[4796]: I1202 20:35:19.845177 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d5e8970-f21a-4be3-b1b9-41dacca9e94c-logs\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:19 crc kubenswrapper[4796]: I1202 20:35:19.845187 4796 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d5e8970-f21a-4be3-b1b9-41dacca9e94c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:19 crc kubenswrapper[4796]: I1202 20:35:19.872641 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d5e8970-f21a-4be3-b1b9-41dacca9e94c-config-data" (OuterVolumeSpecName: "config-data") pod "2d5e8970-f21a-4be3-b1b9-41dacca9e94c" (UID: "2d5e8970-f21a-4be3-b1b9-41dacca9e94c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:35:19 crc kubenswrapper[4796]: I1202 20:35:19.946621 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d5e8970-f21a-4be3-b1b9-41dacca9e94c-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:19 crc kubenswrapper[4796]: I1202 20:35:19.978124 4796 generic.go:334] "Generic (PLEG): container finished" podID="2d5e8970-f21a-4be3-b1b9-41dacca9e94c" containerID="27ad1a6cb2947f3cdc29611a8f6617829a0fc34a21f882cf150da1355faaaa9e" exitCode=0 Dec 02 20:35:19 crc kubenswrapper[4796]: I1202 20:35:19.978184 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"2d5e8970-f21a-4be3-b1b9-41dacca9e94c","Type":"ContainerDied","Data":"27ad1a6cb2947f3cdc29611a8f6617829a0fc34a21f882cf150da1355faaaa9e"} Dec 02 20:35:19 crc kubenswrapper[4796]: I1202 20:35:19.978220 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"2d5e8970-f21a-4be3-b1b9-41dacca9e94c","Type":"ContainerDied","Data":"0b27e6a8411a4f9f2acb36ec842f8c805ddfbb0c1e856f9d398f2019dd6eec07"} Dec 02 20:35:19 crc kubenswrapper[4796]: I1202 20:35:19.978242 4796 scope.go:117] "RemoveContainer" containerID="27ad1a6cb2947f3cdc29611a8f6617829a0fc34a21f882cf150da1355faaaa9e" Dec 02 20:35:19 crc kubenswrapper[4796]: I1202 20:35:19.978407 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:20 crc kubenswrapper[4796]: I1202 20:35:20.024624 4796 scope.go:117] "RemoveContainer" containerID="0bbd068a490a68171976b1f41d78510d0db13aeabb59064391e64d8b02f65f4f" Dec 02 20:35:20 crc kubenswrapper[4796]: I1202 20:35:20.043787 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:35:20 crc kubenswrapper[4796]: I1202 20:35:20.055368 4796 scope.go:117] "RemoveContainer" containerID="27ad1a6cb2947f3cdc29611a8f6617829a0fc34a21f882cf150da1355faaaa9e" Dec 02 20:35:20 crc kubenswrapper[4796]: E1202 20:35:20.056372 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27ad1a6cb2947f3cdc29611a8f6617829a0fc34a21f882cf150da1355faaaa9e\": container with ID starting with 27ad1a6cb2947f3cdc29611a8f6617829a0fc34a21f882cf150da1355faaaa9e not found: ID does not exist" containerID="27ad1a6cb2947f3cdc29611a8f6617829a0fc34a21f882cf150da1355faaaa9e" Dec 02 20:35:20 crc kubenswrapper[4796]: I1202 20:35:20.056594 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27ad1a6cb2947f3cdc29611a8f6617829a0fc34a21f882cf150da1355faaaa9e"} err="failed to get container status \"27ad1a6cb2947f3cdc29611a8f6617829a0fc34a21f882cf150da1355faaaa9e\": rpc error: code = NotFound desc = could not find container \"27ad1a6cb2947f3cdc29611a8f6617829a0fc34a21f882cf150da1355faaaa9e\": container with ID starting with 27ad1a6cb2947f3cdc29611a8f6617829a0fc34a21f882cf150da1355faaaa9e not found: ID does not exist" Dec 02 20:35:20 crc kubenswrapper[4796]: I1202 20:35:20.056727 4796 scope.go:117] "RemoveContainer" containerID="0bbd068a490a68171976b1f41d78510d0db13aeabb59064391e64d8b02f65f4f" Dec 02 20:35:20 crc kubenswrapper[4796]: I1202 20:35:20.057002 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:35:20 crc kubenswrapper[4796]: E1202 20:35:20.057516 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bbd068a490a68171976b1f41d78510d0db13aeabb59064391e64d8b02f65f4f\": container with ID starting with 0bbd068a490a68171976b1f41d78510d0db13aeabb59064391e64d8b02f65f4f not found: ID does not exist" containerID="0bbd068a490a68171976b1f41d78510d0db13aeabb59064391e64d8b02f65f4f" Dec 02 20:35:20 crc kubenswrapper[4796]: I1202 20:35:20.057554 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bbd068a490a68171976b1f41d78510d0db13aeabb59064391e64d8b02f65f4f"} err="failed to get container status \"0bbd068a490a68171976b1f41d78510d0db13aeabb59064391e64d8b02f65f4f\": rpc error: code = NotFound desc = could not find container \"0bbd068a490a68171976b1f41d78510d0db13aeabb59064391e64d8b02f65f4f\": container with ID starting with 0bbd068a490a68171976b1f41d78510d0db13aeabb59064391e64d8b02f65f4f not found: ID does not exist" Dec 02 20:35:20 crc kubenswrapper[4796]: I1202 20:35:20.072308 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:35:20 crc kubenswrapper[4796]: E1202 20:35:20.072734 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="147cf58f-968f-47fd-9f5d-998c7c48b8f6" containerName="extract-utilities" Dec 02 20:35:20 crc kubenswrapper[4796]: I1202 20:35:20.072757 4796 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="147cf58f-968f-47fd-9f5d-998c7c48b8f6" containerName="extract-utilities" Dec 02 20:35:20 crc kubenswrapper[4796]: E1202 20:35:20.072773 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d5e8970-f21a-4be3-b1b9-41dacca9e94c" containerName="watcher-kuttl-api-log" Dec 02 20:35:20 crc kubenswrapper[4796]: I1202 20:35:20.072781 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d5e8970-f21a-4be3-b1b9-41dacca9e94c" containerName="watcher-kuttl-api-log" Dec 02 20:35:20 crc kubenswrapper[4796]: E1202 20:35:20.072799 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="147cf58f-968f-47fd-9f5d-998c7c48b8f6" containerName="registry-server" Dec 02 20:35:20 crc kubenswrapper[4796]: I1202 20:35:20.072806 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="147cf58f-968f-47fd-9f5d-998c7c48b8f6" containerName="registry-server" Dec 02 20:35:20 crc kubenswrapper[4796]: E1202 20:35:20.072829 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d5e8970-f21a-4be3-b1b9-41dacca9e94c" containerName="watcher-api" Dec 02 20:35:20 crc kubenswrapper[4796]: I1202 20:35:20.072835 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d5e8970-f21a-4be3-b1b9-41dacca9e94c" containerName="watcher-api" Dec 02 20:35:20 crc kubenswrapper[4796]: E1202 20:35:20.072849 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="147cf58f-968f-47fd-9f5d-998c7c48b8f6" containerName="extract-content" Dec 02 20:35:20 crc kubenswrapper[4796]: I1202 20:35:20.072856 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="147cf58f-968f-47fd-9f5d-998c7c48b8f6" containerName="extract-content" Dec 02 20:35:20 crc kubenswrapper[4796]: I1202 20:35:20.073042 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d5e8970-f21a-4be3-b1b9-41dacca9e94c" containerName="watcher-kuttl-api-log" Dec 02 20:35:20 crc kubenswrapper[4796]: I1202 20:35:20.073066 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d5e8970-f21a-4be3-b1b9-41dacca9e94c" containerName="watcher-api" Dec 02 20:35:20 crc kubenswrapper[4796]: I1202 20:35:20.073075 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="147cf58f-968f-47fd-9f5d-998c7c48b8f6" containerName="registry-server" Dec 02 20:35:20 crc kubenswrapper[4796]: I1202 20:35:20.074084 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:20 crc kubenswrapper[4796]: I1202 20:35:20.077923 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Dec 02 20:35:20 crc kubenswrapper[4796]: I1202 20:35:20.078017 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-public-svc" Dec 02 20:35:20 crc kubenswrapper[4796]: I1202 20:35:20.078284 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-internal-svc" Dec 02 20:35:20 crc kubenswrapper[4796]: I1202 20:35:20.082130 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:35:20 crc kubenswrapper[4796]: I1202 20:35:20.152347 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"9f952e5c-16ef-4af8-9192-4888cf1ad0dc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:20 crc kubenswrapper[4796]: I1202 20:35:20.152402 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq8nl\" (UniqueName: \"kubernetes.io/projected/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-kube-api-access-nq8nl\") pod \"watcher-kuttl-api-0\" (UID: \"9f952e5c-16ef-4af8-9192-4888cf1ad0dc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:20 crc kubenswrapper[4796]: I1202 20:35:20.152503 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"9f952e5c-16ef-4af8-9192-4888cf1ad0dc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:20 crc kubenswrapper[4796]: I1202 20:35:20.152537 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"9f952e5c-16ef-4af8-9192-4888cf1ad0dc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:20 crc kubenswrapper[4796]: I1202 20:35:20.152581 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"9f952e5c-16ef-4af8-9192-4888cf1ad0dc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:20 crc kubenswrapper[4796]: I1202 20:35:20.152614 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"9f952e5c-16ef-4af8-9192-4888cf1ad0dc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:20 crc kubenswrapper[4796]: I1202 20:35:20.152633 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-logs\") pod \"watcher-kuttl-api-0\" (UID: 
\"9f952e5c-16ef-4af8-9192-4888cf1ad0dc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:20 crc kubenswrapper[4796]: I1202 20:35:20.253705 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"9f952e5c-16ef-4af8-9192-4888cf1ad0dc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:20 crc kubenswrapper[4796]: I1202 20:35:20.253774 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"9f952e5c-16ef-4af8-9192-4888cf1ad0dc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:20 crc kubenswrapper[4796]: I1202 20:35:20.253841 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"9f952e5c-16ef-4af8-9192-4888cf1ad0dc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:20 crc kubenswrapper[4796]: I1202 20:35:20.254612 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"9f952e5c-16ef-4af8-9192-4888cf1ad0dc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:20 crc kubenswrapper[4796]: I1202 20:35:20.254721 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-logs\") pod \"watcher-kuttl-api-0\" (UID: \"9f952e5c-16ef-4af8-9192-4888cf1ad0dc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:20 crc kubenswrapper[4796]: I1202 20:35:20.254976 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"9f952e5c-16ef-4af8-9192-4888cf1ad0dc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:20 crc kubenswrapper[4796]: I1202 20:35:20.255059 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq8nl\" (UniqueName: \"kubernetes.io/projected/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-kube-api-access-nq8nl\") pod \"watcher-kuttl-api-0\" (UID: \"9f952e5c-16ef-4af8-9192-4888cf1ad0dc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:20 crc kubenswrapper[4796]: I1202 20:35:20.255437 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-logs\") pod \"watcher-kuttl-api-0\" (UID: \"9f952e5c-16ef-4af8-9192-4888cf1ad0dc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:20 crc kubenswrapper[4796]: I1202 20:35:20.258491 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"9f952e5c-16ef-4af8-9192-4888cf1ad0dc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:20 crc kubenswrapper[4796]: 
I1202 20:35:20.259373 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"9f952e5c-16ef-4af8-9192-4888cf1ad0dc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:20 crc kubenswrapper[4796]: I1202 20:35:20.259767 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"9f952e5c-16ef-4af8-9192-4888cf1ad0dc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:20 crc kubenswrapper[4796]: I1202 20:35:20.259781 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"9f952e5c-16ef-4af8-9192-4888cf1ad0dc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:20 crc kubenswrapper[4796]: I1202 20:35:20.260186 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"9f952e5c-16ef-4af8-9192-4888cf1ad0dc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:20 crc kubenswrapper[4796]: I1202 20:35:20.280694 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq8nl\" (UniqueName: \"kubernetes.io/projected/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-kube-api-access-nq8nl\") pod \"watcher-kuttl-api-0\" (UID: \"9f952e5c-16ef-4af8-9192-4888cf1ad0dc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:20 crc kubenswrapper[4796]: I1202 20:35:20.390604 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:20 crc kubenswrapper[4796]: I1202 20:35:20.897893 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:35:21 crc kubenswrapper[4796]: I1202 20:35:21.011909 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"9f952e5c-16ef-4af8-9192-4888cf1ad0dc","Type":"ContainerStarted","Data":"944cc74de93d04d5eb4358004c2f07d1328ec7f7f461882197a3a6edb5f496f9"} Dec 02 20:35:21 crc kubenswrapper[4796]: I1202 20:35:21.291731 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d5e8970-f21a-4be3-b1b9-41dacca9e94c" path="/var/lib/kubelet/pods/2d5e8970-f21a-4be3-b1b9-41dacca9e94c/volumes" Dec 02 20:35:21 crc kubenswrapper[4796]: E1202 20:35:21.907266 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f4fa31b_730e_4c93_b54f_558da1518581.slice/crio-0942b1d20d7999e7c0d9359ad174f9aae6a26be02288ee0661b7b9626c06eb9d.scope\": RecentStats: unable to find data in memory cache]" Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.029805 4796 generic.go:334] "Generic (PLEG): container finished" podID="3f4fa31b-730e-4c93-b54f-558da1518581" containerID="0942b1d20d7999e7c0d9359ad174f9aae6a26be02288ee0661b7b9626c06eb9d" exitCode=0 Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.029860 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3f4fa31b-730e-4c93-b54f-558da1518581","Type":"ContainerDied","Data":"0942b1d20d7999e7c0d9359ad174f9aae6a26be02288ee0661b7b9626c06eb9d"} Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.032525 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"9f952e5c-16ef-4af8-9192-4888cf1ad0dc","Type":"ContainerStarted","Data":"0eba6d85ff22cf8b27f35d261a58bfcd3a696fe12c2c0b4f9179d28ad5db8295"} Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.032563 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"9f952e5c-16ef-4af8-9192-4888cf1ad0dc","Type":"ContainerStarted","Data":"7c5c1236a4a527df008706d8cb6411457cb411bdf19de937162f5fba6d356c6d"} Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.033730 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.188033 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-7jml5"] Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.240440 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-7jml5"] Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.260108 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher2f14-account-delete-4vw8t"] Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.261745 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher2f14-account-delete-4vw8t" Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.264911 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.264887987 podStartE2EDuration="2.264887987s" podCreationTimestamp="2025-12-02 20:35:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:35:22.089965382 +0000 UTC m=+1405.093340916" watchObservedRunningTime="2025-12-02 20:35:22.264887987 +0000 UTC m=+1405.268263521" Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.281798 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher2f14-account-delete-4vw8t"] Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.305312 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.305630 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="ca397d3a-cbff-4faf-b80d-a6eab99fc47a" containerName="watcher-applier" containerID="cri-o://f88eeb51e39bfb91504e41120735486f46a481df42fe31b6d03aab7911b57e38" gracePeriod=30 Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.311461 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.311760 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80" containerName="watcher-decision-engine" containerID="cri-o://db671afd412dea6ad0e3f2b7108707a4ec4fc39368de0d12bff4333a3a262472" gracePeriod=30 Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.326359 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.404306 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.422149 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz6w8\" (UniqueName: \"kubernetes.io/projected/77bb45a7-f635-4ce6-9404-10942e3301ce-kube-api-access-zz6w8\") pod \"watcher2f14-account-delete-4vw8t\" (UID: \"77bb45a7-f635-4ce6-9404-10942e3301ce\") " pod="watcher-kuttl-default/watcher2f14-account-delete-4vw8t" Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.422208 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77bb45a7-f635-4ce6-9404-10942e3301ce-operator-scripts\") pod \"watcher2f14-account-delete-4vw8t\" (UID: \"77bb45a7-f635-4ce6-9404-10942e3301ce\") " pod="watcher-kuttl-default/watcher2f14-account-delete-4vw8t" Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.523090 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49cq2\" (UniqueName: \"kubernetes.io/projected/3f4fa31b-730e-4c93-b54f-558da1518581-kube-api-access-49cq2\") pod \"3f4fa31b-730e-4c93-b54f-558da1518581\" (UID: \"3f4fa31b-730e-4c93-b54f-558da1518581\") " Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.523171 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f4fa31b-730e-4c93-b54f-558da1518581-run-httpd\") pod \"3f4fa31b-730e-4c93-b54f-558da1518581\" (UID: \"3f4fa31b-730e-4c93-b54f-558da1518581\") " Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.523269 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f4fa31b-730e-4c93-b54f-558da1518581-ceilometer-tls-certs\") pod \"3f4fa31b-730e-4c93-b54f-558da1518581\" (UID: \"3f4fa31b-730e-4c93-b54f-558da1518581\") " Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.523299 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f4fa31b-730e-4c93-b54f-558da1518581-sg-core-conf-yaml\") pod \"3f4fa31b-730e-4c93-b54f-558da1518581\" (UID: \"3f4fa31b-730e-4c93-b54f-558da1518581\") " Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.523322 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f4fa31b-730e-4c93-b54f-558da1518581-scripts\") pod \"3f4fa31b-730e-4c93-b54f-558da1518581\" (UID: \"3f4fa31b-730e-4c93-b54f-558da1518581\") " Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.523381 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f4fa31b-730e-4c93-b54f-558da1518581-config-data\") pod \"3f4fa31b-730e-4c93-b54f-558da1518581\" (UID: \"3f4fa31b-730e-4c93-b54f-558da1518581\") " Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.523486 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f4fa31b-730e-4c93-b54f-558da1518581-combined-ca-bundle\") pod \"3f4fa31b-730e-4c93-b54f-558da1518581\" (UID: \"3f4fa31b-730e-4c93-b54f-558da1518581\") " Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.523516 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f4fa31b-730e-4c93-b54f-558da1518581-log-httpd\") pod \"3f4fa31b-730e-4c93-b54f-558da1518581\" (UID: \"3f4fa31b-730e-4c93-b54f-558da1518581\") " Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.523613 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f4fa31b-730e-4c93-b54f-558da1518581-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3f4fa31b-730e-4c93-b54f-558da1518581" (UID: "3f4fa31b-730e-4c93-b54f-558da1518581"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.523860 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz6w8\" (UniqueName: \"kubernetes.io/projected/77bb45a7-f635-4ce6-9404-10942e3301ce-kube-api-access-zz6w8\") pod \"watcher2f14-account-delete-4vw8t\" (UID: \"77bb45a7-f635-4ce6-9404-10942e3301ce\") " pod="watcher-kuttl-default/watcher2f14-account-delete-4vw8t" Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.523909 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77bb45a7-f635-4ce6-9404-10942e3301ce-operator-scripts\") pod \"watcher2f14-account-delete-4vw8t\" (UID: \"77bb45a7-f635-4ce6-9404-10942e3301ce\") " pod="watcher-kuttl-default/watcher2f14-account-delete-4vw8t" Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.524010 4796 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f4fa31b-730e-4c93-b54f-558da1518581-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.524239 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f4fa31b-730e-4c93-b54f-558da1518581-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3f4fa31b-730e-4c93-b54f-558da1518581" (UID: "3f4fa31b-730e-4c93-b54f-558da1518581"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.524699 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77bb45a7-f635-4ce6-9404-10942e3301ce-operator-scripts\") pod \"watcher2f14-account-delete-4vw8t\" (UID: \"77bb45a7-f635-4ce6-9404-10942e3301ce\") " pod="watcher-kuttl-default/watcher2f14-account-delete-4vw8t" Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.529403 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f4fa31b-730e-4c93-b54f-558da1518581-kube-api-access-49cq2" (OuterVolumeSpecName: "kube-api-access-49cq2") pod "3f4fa31b-730e-4c93-b54f-558da1518581" (UID: "3f4fa31b-730e-4c93-b54f-558da1518581"). InnerVolumeSpecName "kube-api-access-49cq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.530687 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f4fa31b-730e-4c93-b54f-558da1518581-scripts" (OuterVolumeSpecName: "scripts") pod "3f4fa31b-730e-4c93-b54f-558da1518581" (UID: "3f4fa31b-730e-4c93-b54f-558da1518581"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.546064 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz6w8\" (UniqueName: \"kubernetes.io/projected/77bb45a7-f635-4ce6-9404-10942e3301ce-kube-api-access-zz6w8\") pod \"watcher2f14-account-delete-4vw8t\" (UID: \"77bb45a7-f635-4ce6-9404-10942e3301ce\") " pod="watcher-kuttl-default/watcher2f14-account-delete-4vw8t" Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.568432 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f4fa31b-730e-4c93-b54f-558da1518581-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3f4fa31b-730e-4c93-b54f-558da1518581" (UID: "3f4fa31b-730e-4c93-b54f-558da1518581"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.594193 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f4fa31b-730e-4c93-b54f-558da1518581-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "3f4fa31b-730e-4c93-b54f-558da1518581" (UID: "3f4fa31b-730e-4c93-b54f-558da1518581"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.612360 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f4fa31b-730e-4c93-b54f-558da1518581-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f4fa31b-730e-4c93-b54f-558da1518581" (UID: "3f4fa31b-730e-4c93-b54f-558da1518581"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.617134 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher2f14-account-delete-4vw8t" Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.625743 4796 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f4fa31b-730e-4c93-b54f-558da1518581-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.625877 4796 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f4fa31b-730e-4c93-b54f-558da1518581-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.625964 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f4fa31b-730e-4c93-b54f-558da1518581-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.626034 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f4fa31b-730e-4c93-b54f-558da1518581-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.626102 4796 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f4fa31b-730e-4c93-b54f-558da1518581-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.626170 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49cq2\" (UniqueName: \"kubernetes.io/projected/3f4fa31b-730e-4c93-b54f-558da1518581-kube-api-access-49cq2\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.655907 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f4fa31b-730e-4c93-b54f-558da1518581-config-data" (OuterVolumeSpecName: "config-data") pod "3f4fa31b-730e-4c93-b54f-558da1518581" (UID: "3f4fa31b-730e-4c93-b54f-558da1518581"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:35:22 crc kubenswrapper[4796]: I1202 20:35:22.731494 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f4fa31b-730e-4c93-b54f-558da1518581-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.081340 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3f4fa31b-730e-4c93-b54f-558da1518581","Type":"ContainerDied","Data":"7bc9ea5125ed23bd927550c864cbf5ea377c59c9c4487beb10998ff809e678a3"} Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.081423 4796 scope.go:117] "RemoveContainer" containerID="a69ff185a3f62ee1c461bd16f59804a28090e1b92e2155d020eab543fb8b41be" Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.081361 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.081823 4796 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="watcher-kuttl-default/watcher-kuttl-api-0" secret="" err="secret \"watcher-watcher-kuttl-dockercfg-d5gx7\" not found" Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.102190 4796 scope.go:117] "RemoveContainer" containerID="38f1bd216e6c011205e74359631a3b6cdc6038ce0595aaac314a1918280d59cf" Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.121024 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.136225 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.146077 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher2f14-account-delete-4vw8t"] Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.156199 4796 scope.go:117] "RemoveContainer" containerID="0942b1d20d7999e7c0d9359ad174f9aae6a26be02288ee0661b7b9626c06eb9d" Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.158326 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:35:23 crc kubenswrapper[4796]: E1202 20:35:23.158872 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f4fa31b-730e-4c93-b54f-558da1518581" containerName="ceilometer-central-agent" Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.158898 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f4fa31b-730e-4c93-b54f-558da1518581" containerName="ceilometer-central-agent" Dec 02 20:35:23 crc kubenswrapper[4796]: E1202 20:35:23.158945 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f4fa31b-730e-4c93-b54f-558da1518581" containerName="sg-core" Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.158955 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f4fa31b-730e-4c93-b54f-558da1518581" containerName="sg-core" Dec 02 20:35:23 crc kubenswrapper[4796]: E1202 20:35:23.158966 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f4fa31b-730e-4c93-b54f-558da1518581" containerName="proxy-httpd" Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.158973 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f4fa31b-730e-4c93-b54f-558da1518581" containerName="proxy-httpd" Dec 02 20:35:23 crc kubenswrapper[4796]: E1202 20:35:23.158988 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f4fa31b-730e-4c93-b54f-558da1518581" containerName="ceilometer-notification-agent" Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.159009 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f4fa31b-730e-4c93-b54f-558da1518581" containerName="ceilometer-notification-agent" Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.159228 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f4fa31b-730e-4c93-b54f-558da1518581" containerName="sg-core" Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.159271 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f4fa31b-730e-4c93-b54f-558da1518581" containerName="ceilometer-notification-agent" Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.159286 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f4fa31b-730e-4c93-b54f-558da1518581" containerName="proxy-httpd" Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.159306 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f4fa31b-730e-4c93-b54f-558da1518581" 
containerName="ceilometer-central-agent" Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.161443 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.168310 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.168383 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.168433 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.194234 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.226745 4796 scope.go:117] "RemoveContainer" containerID="673d32932b6940ecd3881ed24e418b5938ec62153edc554d7fba020018af2618" Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.239830 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-config-data\") pod \"ceilometer-0\" (UID: \"c6a5c8c2-e2e3-4636-bfd4-126009a482f1\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.240031 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-log-httpd\") pod \"ceilometer-0\" (UID: \"c6a5c8c2-e2e3-4636-bfd4-126009a482f1\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.240061 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzn55\" (UniqueName: \"kubernetes.io/projected/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-kube-api-access-rzn55\") pod \"ceilometer-0\" (UID: \"c6a5c8c2-e2e3-4636-bfd4-126009a482f1\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.240889 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c6a5c8c2-e2e3-4636-bfd4-126009a482f1\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:23 crc kubenswrapper[4796]: E1202 20:35:23.240957 4796 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-api-config-data: secret "watcher-kuttl-api-config-data" not found Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.240986 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-scripts\") pod \"ceilometer-0\" (UID: \"c6a5c8c2-e2e3-4636-bfd4-126009a482f1\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:23 crc kubenswrapper[4796]: E1202 20:35:23.241038 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-config-data podName:9f952e5c-16ef-4af8-9192-4888cf1ad0dc nodeName:}" failed. 
No retries permitted until 2025-12-02 20:35:23.741013393 +0000 UTC m=+1406.744389127 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-config-data") pod "watcher-kuttl-api-0" (UID: "9f952e5c-16ef-4af8-9192-4888cf1ad0dc") : secret "watcher-kuttl-api-config-data" not found Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.241074 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c6a5c8c2-e2e3-4636-bfd4-126009a482f1\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.241166 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-run-httpd\") pod \"ceilometer-0\" (UID: \"c6a5c8c2-e2e3-4636-bfd4-126009a482f1\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.241408 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c6a5c8c2-e2e3-4636-bfd4-126009a482f1\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.275144 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f4fa31b-730e-4c93-b54f-558da1518581" path="/var/lib/kubelet/pods/3f4fa31b-730e-4c93-b54f-558da1518581/volumes" Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.276023 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="501c59b2-9c90-4fde-bc60-d15ab7b63cff" path="/var/lib/kubelet/pods/501c59b2-9c90-4fde-bc60-d15ab7b63cff/volumes" Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.345838 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-config-data\") pod \"ceilometer-0\" (UID: \"c6a5c8c2-e2e3-4636-bfd4-126009a482f1\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.345979 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-log-httpd\") pod \"ceilometer-0\" (UID: \"c6a5c8c2-e2e3-4636-bfd4-126009a482f1\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.346024 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzn55\" (UniqueName: \"kubernetes.io/projected/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-kube-api-access-rzn55\") pod \"ceilometer-0\" (UID: \"c6a5c8c2-e2e3-4636-bfd4-126009a482f1\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.346118 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c6a5c8c2-e2e3-4636-bfd4-126009a482f1\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:23 crc 
kubenswrapper[4796]: I1202 20:35:23.346162 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-scripts\") pod \"ceilometer-0\" (UID: \"c6a5c8c2-e2e3-4636-bfd4-126009a482f1\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.346186 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c6a5c8c2-e2e3-4636-bfd4-126009a482f1\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.346218 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-run-httpd\") pod \"ceilometer-0\" (UID: \"c6a5c8c2-e2e3-4636-bfd4-126009a482f1\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.346339 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c6a5c8c2-e2e3-4636-bfd4-126009a482f1\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.347851 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-log-httpd\") pod \"ceilometer-0\" (UID: \"c6a5c8c2-e2e3-4636-bfd4-126009a482f1\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.348610 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-run-httpd\") pod \"ceilometer-0\" (UID: \"c6a5c8c2-e2e3-4636-bfd4-126009a482f1\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.353612 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c6a5c8c2-e2e3-4636-bfd4-126009a482f1\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.353824 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-config-data\") pod \"ceilometer-0\" (UID: \"c6a5c8c2-e2e3-4636-bfd4-126009a482f1\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.354192 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c6a5c8c2-e2e3-4636-bfd4-126009a482f1\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.356355 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-scripts\") pod \"ceilometer-0\" (UID: \"c6a5c8c2-e2e3-4636-bfd4-126009a482f1\") " 
pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.357047 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c6a5c8c2-e2e3-4636-bfd4-126009a482f1\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.366356 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzn55\" (UniqueName: \"kubernetes.io/projected/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-kube-api-access-rzn55\") pod \"ceilometer-0\" (UID: \"c6a5c8c2-e2e3-4636-bfd4-126009a482f1\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.492636 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:23 crc kubenswrapper[4796]: E1202 20:35:23.753996 4796 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-api-config-data: secret "watcher-kuttl-api-config-data" not found Dec 02 20:35:23 crc kubenswrapper[4796]: E1202 20:35:23.754367 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-config-data podName:9f952e5c-16ef-4af8-9192-4888cf1ad0dc nodeName:}" failed. No retries permitted until 2025-12-02 20:35:24.754345069 +0000 UTC m=+1407.757720613 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-config-data") pod "watcher-kuttl-api-0" (UID: "9f952e5c-16ef-4af8-9192-4888cf1ad0dc") : secret "watcher-kuttl-api-config-data" not found Dec 02 20:35:23 crc kubenswrapper[4796]: I1202 20:35:23.994941 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:35:24 crc kubenswrapper[4796]: I1202 20:35:24.091126 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c6a5c8c2-e2e3-4636-bfd4-126009a482f1","Type":"ContainerStarted","Data":"0fd8bd8ac18511c9876e6a487bcf1f6f226b462a2217cf1454e2b58761201b7b"} Dec 02 20:35:24 crc kubenswrapper[4796]: I1202 20:35:24.093464 4796 generic.go:334] "Generic (PLEG): container finished" podID="77bb45a7-f635-4ce6-9404-10942e3301ce" containerID="9e9ec0ec6676b65ca53851c5c4195d0332d595aca2b1d5aced2cba3d2141287f" exitCode=0 Dec 02 20:35:24 crc kubenswrapper[4796]: I1202 20:35:24.093584 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher2f14-account-delete-4vw8t" event={"ID":"77bb45a7-f635-4ce6-9404-10942e3301ce","Type":"ContainerDied","Data":"9e9ec0ec6676b65ca53851c5c4195d0332d595aca2b1d5aced2cba3d2141287f"} Dec 02 20:35:24 crc kubenswrapper[4796]: I1202 20:35:24.093624 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher2f14-account-delete-4vw8t" event={"ID":"77bb45a7-f635-4ce6-9404-10942e3301ce","Type":"ContainerStarted","Data":"2e07e7b9ed39bb9c1bcd01cbe6a06c8c2e644b8c12ae2c6ef82a1aa3ccf7b5d7"} Dec 02 20:35:24 crc kubenswrapper[4796]: I1202 20:35:24.095056 4796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 20:35:24 crc kubenswrapper[4796]: I1202 20:35:24.095302 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" 
podUID="9f952e5c-16ef-4af8-9192-4888cf1ad0dc" containerName="watcher-kuttl-api-log" containerID="cri-o://7c5c1236a4a527df008706d8cb6411457cb411bdf19de937162f5fba6d356c6d" gracePeriod=30 Dec 02 20:35:24 crc kubenswrapper[4796]: I1202 20:35:24.095354 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="9f952e5c-16ef-4af8-9192-4888cf1ad0dc" containerName="watcher-api" containerID="cri-o://0eba6d85ff22cf8b27f35d261a58bfcd3a696fe12c2c0b4f9179d28ad5db8295" gracePeriod=30 Dec 02 20:35:24 crc kubenswrapper[4796]: I1202 20:35:24.101016 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="9f952e5c-16ef-4af8-9192-4888cf1ad0dc" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.152:9322/\": EOF" Dec 02 20:35:24 crc kubenswrapper[4796]: E1202 20:35:24.398509 4796 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f88eeb51e39bfb91504e41120735486f46a481df42fe31b6d03aab7911b57e38" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 02 20:35:24 crc kubenswrapper[4796]: E1202 20:35:24.400172 4796 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f88eeb51e39bfb91504e41120735486f46a481df42fe31b6d03aab7911b57e38" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 02 20:35:24 crc kubenswrapper[4796]: E1202 20:35:24.407405 4796 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f88eeb51e39bfb91504e41120735486f46a481df42fe31b6d03aab7911b57e38" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 02 20:35:24 crc kubenswrapper[4796]: E1202 20:35:24.407450 4796 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="ca397d3a-cbff-4faf-b80d-a6eab99fc47a" containerName="watcher-applier" Dec 02 20:35:24 crc kubenswrapper[4796]: I1202 20:35:24.497493 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="2d5e8970-f21a-4be3-b1b9-41dacca9e94c" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.150:9322/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 02 20:35:24 crc kubenswrapper[4796]: I1202 20:35:24.497523 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="2d5e8970-f21a-4be3-b1b9-41dacca9e94c" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"https://10.217.0.150:9322/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 02 20:35:24 crc kubenswrapper[4796]: I1202 20:35:24.583101 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:35:24 crc kubenswrapper[4796]: E1202 20:35:24.770573 4796 secret.go:188] Couldn't get secret 
watcher-kuttl-default/watcher-kuttl-api-config-data: secret "watcher-kuttl-api-config-data" not found Dec 02 20:35:24 crc kubenswrapper[4796]: E1202 20:35:24.770667 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-config-data podName:9f952e5c-16ef-4af8-9192-4888cf1ad0dc nodeName:}" failed. No retries permitted until 2025-12-02 20:35:26.77064443 +0000 UTC m=+1409.774019974 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-config-data") pod "watcher-kuttl-api-0" (UID: "9f952e5c-16ef-4af8-9192-4888cf1ad0dc") : secret "watcher-kuttl-api-config-data" not found Dec 02 20:35:25 crc kubenswrapper[4796]: I1202 20:35:25.121960 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c6a5c8c2-e2e3-4636-bfd4-126009a482f1","Type":"ContainerStarted","Data":"384b3e1de5e744f9940333e3c8b95398a2b0e9159e885cacb0fb2db6534879a7"} Dec 02 20:35:25 crc kubenswrapper[4796]: I1202 20:35:25.125233 4796 generic.go:334] "Generic (PLEG): container finished" podID="9f952e5c-16ef-4af8-9192-4888cf1ad0dc" containerID="7c5c1236a4a527df008706d8cb6411457cb411bdf19de937162f5fba6d356c6d" exitCode=143 Dec 02 20:35:25 crc kubenswrapper[4796]: I1202 20:35:25.125420 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"9f952e5c-16ef-4af8-9192-4888cf1ad0dc","Type":"ContainerDied","Data":"7c5c1236a4a527df008706d8cb6411457cb411bdf19de937162f5fba6d356c6d"} Dec 02 20:35:25 crc kubenswrapper[4796]: I1202 20:35:25.391582 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:25 crc kubenswrapper[4796]: I1202 20:35:25.529175 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher2f14-account-delete-4vw8t" Dec 02 20:35:25 crc kubenswrapper[4796]: I1202 20:35:25.585178 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zz6w8\" (UniqueName: \"kubernetes.io/projected/77bb45a7-f635-4ce6-9404-10942e3301ce-kube-api-access-zz6w8\") pod \"77bb45a7-f635-4ce6-9404-10942e3301ce\" (UID: \"77bb45a7-f635-4ce6-9404-10942e3301ce\") " Dec 02 20:35:25 crc kubenswrapper[4796]: I1202 20:35:25.585284 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77bb45a7-f635-4ce6-9404-10942e3301ce-operator-scripts\") pod \"77bb45a7-f635-4ce6-9404-10942e3301ce\" (UID: \"77bb45a7-f635-4ce6-9404-10942e3301ce\") " Dec 02 20:35:25 crc kubenswrapper[4796]: I1202 20:35:25.586095 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77bb45a7-f635-4ce6-9404-10942e3301ce-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "77bb45a7-f635-4ce6-9404-10942e3301ce" (UID: "77bb45a7-f635-4ce6-9404-10942e3301ce"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:35:25 crc kubenswrapper[4796]: I1202 20:35:25.592156 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77bb45a7-f635-4ce6-9404-10942e3301ce-kube-api-access-zz6w8" (OuterVolumeSpecName: "kube-api-access-zz6w8") pod "77bb45a7-f635-4ce6-9404-10942e3301ce" (UID: "77bb45a7-f635-4ce6-9404-10942e3301ce"). 
InnerVolumeSpecName "kube-api-access-zz6w8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:35:25 crc kubenswrapper[4796]: I1202 20:35:25.687760 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zz6w8\" (UniqueName: \"kubernetes.io/projected/77bb45a7-f635-4ce6-9404-10942e3301ce-kube-api-access-zz6w8\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:25 crc kubenswrapper[4796]: I1202 20:35:25.687811 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77bb45a7-f635-4ce6-9404-10942e3301ce-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:26 crc kubenswrapper[4796]: I1202 20:35:26.136102 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher2f14-account-delete-4vw8t" event={"ID":"77bb45a7-f635-4ce6-9404-10942e3301ce","Type":"ContainerDied","Data":"2e07e7b9ed39bb9c1bcd01cbe6a06c8c2e644b8c12ae2c6ef82a1aa3ccf7b5d7"} Dec 02 20:35:26 crc kubenswrapper[4796]: I1202 20:35:26.136500 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e07e7b9ed39bb9c1bcd01cbe6a06c8c2e644b8c12ae2c6ef82a1aa3ccf7b5d7" Dec 02 20:35:26 crc kubenswrapper[4796]: I1202 20:35:26.136184 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher2f14-account-delete-4vw8t" Dec 02 20:35:26 crc kubenswrapper[4796]: I1202 20:35:26.146911 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c6a5c8c2-e2e3-4636-bfd4-126009a482f1","Type":"ContainerStarted","Data":"cebba8744ddf888fad24285f534ff72c3ca3c0a6e13ff2aaa44b8da10023ced0"} Dec 02 20:35:26 crc kubenswrapper[4796]: I1202 20:35:26.225760 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="9f952e5c-16ef-4af8-9192-4888cf1ad0dc" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.152:9322/\": read tcp 10.217.0.2:54836->10.217.0.152:9322: read: connection reset by peer" Dec 02 20:35:26 crc kubenswrapper[4796]: I1202 20:35:26.226382 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="9f952e5c-16ef-4af8-9192-4888cf1ad0dc" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.152:9322/\": dial tcp 10.217.0.152:9322: connect: connection refused" Dec 02 20:35:26 crc kubenswrapper[4796]: I1202 20:35:26.650287 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:35:26 crc kubenswrapper[4796]: I1202 20:35:26.709855 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca397d3a-cbff-4faf-b80d-a6eab99fc47a-combined-ca-bundle\") pod \"ca397d3a-cbff-4faf-b80d-a6eab99fc47a\" (UID: \"ca397d3a-cbff-4faf-b80d-a6eab99fc47a\") " Dec 02 20:35:26 crc kubenswrapper[4796]: I1202 20:35:26.709929 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca397d3a-cbff-4faf-b80d-a6eab99fc47a-logs\") pod \"ca397d3a-cbff-4faf-b80d-a6eab99fc47a\" (UID: \"ca397d3a-cbff-4faf-b80d-a6eab99fc47a\") " Dec 02 20:35:26 crc kubenswrapper[4796]: I1202 20:35:26.710086 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca397d3a-cbff-4faf-b80d-a6eab99fc47a-config-data\") pod \"ca397d3a-cbff-4faf-b80d-a6eab99fc47a\" (UID: \"ca397d3a-cbff-4faf-b80d-a6eab99fc47a\") " Dec 02 20:35:26 crc kubenswrapper[4796]: I1202 20:35:26.710130 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lx2f7\" (UniqueName: \"kubernetes.io/projected/ca397d3a-cbff-4faf-b80d-a6eab99fc47a-kube-api-access-lx2f7\") pod \"ca397d3a-cbff-4faf-b80d-a6eab99fc47a\" (UID: \"ca397d3a-cbff-4faf-b80d-a6eab99fc47a\") " Dec 02 20:35:26 crc kubenswrapper[4796]: I1202 20:35:26.711087 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca397d3a-cbff-4faf-b80d-a6eab99fc47a-logs" (OuterVolumeSpecName: "logs") pod "ca397d3a-cbff-4faf-b80d-a6eab99fc47a" (UID: "ca397d3a-cbff-4faf-b80d-a6eab99fc47a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:35:26 crc kubenswrapper[4796]: I1202 20:35:26.715622 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca397d3a-cbff-4faf-b80d-a6eab99fc47a-kube-api-access-lx2f7" (OuterVolumeSpecName: "kube-api-access-lx2f7") pod "ca397d3a-cbff-4faf-b80d-a6eab99fc47a" (UID: "ca397d3a-cbff-4faf-b80d-a6eab99fc47a"). InnerVolumeSpecName "kube-api-access-lx2f7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:35:26 crc kubenswrapper[4796]: I1202 20:35:26.737546 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca397d3a-cbff-4faf-b80d-a6eab99fc47a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca397d3a-cbff-4faf-b80d-a6eab99fc47a" (UID: "ca397d3a-cbff-4faf-b80d-a6eab99fc47a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:35:26 crc kubenswrapper[4796]: I1202 20:35:26.776193 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca397d3a-cbff-4faf-b80d-a6eab99fc47a-config-data" (OuterVolumeSpecName: "config-data") pod "ca397d3a-cbff-4faf-b80d-a6eab99fc47a" (UID: "ca397d3a-cbff-4faf-b80d-a6eab99fc47a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:35:26 crc kubenswrapper[4796]: I1202 20:35:26.801447 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:26 crc kubenswrapper[4796]: I1202 20:35:26.813469 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca397d3a-cbff-4faf-b80d-a6eab99fc47a-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:26 crc kubenswrapper[4796]: I1202 20:35:26.813506 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lx2f7\" (UniqueName: \"kubernetes.io/projected/ca397d3a-cbff-4faf-b80d-a6eab99fc47a-kube-api-access-lx2f7\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:26 crc kubenswrapper[4796]: I1202 20:35:26.813522 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca397d3a-cbff-4faf-b80d-a6eab99fc47a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:26 crc kubenswrapper[4796]: I1202 20:35:26.813533 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca397d3a-cbff-4faf-b80d-a6eab99fc47a-logs\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:26 crc kubenswrapper[4796]: E1202 20:35:26.813622 4796 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-api-config-data: secret "watcher-kuttl-api-config-data" not found Dec 02 20:35:26 crc kubenswrapper[4796]: E1202 20:35:26.813688 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-config-data podName:9f952e5c-16ef-4af8-9192-4888cf1ad0dc nodeName:}" failed. No retries permitted until 2025-12-02 20:35:30.813668684 +0000 UTC m=+1413.817044208 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-config-data") pod "watcher-kuttl-api-0" (UID: "9f952e5c-16ef-4af8-9192-4888cf1ad0dc") : secret "watcher-kuttl-api-config-data" not found Dec 02 20:35:26 crc kubenswrapper[4796]: I1202 20:35:26.914248 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-config-data\") pod \"9f952e5c-16ef-4af8-9192-4888cf1ad0dc\" (UID: \"9f952e5c-16ef-4af8-9192-4888cf1ad0dc\") " Dec 02 20:35:26 crc kubenswrapper[4796]: I1202 20:35:26.914647 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-combined-ca-bundle\") pod \"9f952e5c-16ef-4af8-9192-4888cf1ad0dc\" (UID: \"9f952e5c-16ef-4af8-9192-4888cf1ad0dc\") " Dec 02 20:35:26 crc kubenswrapper[4796]: I1202 20:35:26.914708 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-custom-prometheus-ca\") pod \"9f952e5c-16ef-4af8-9192-4888cf1ad0dc\" (UID: \"9f952e5c-16ef-4af8-9192-4888cf1ad0dc\") " Dec 02 20:35:26 crc kubenswrapper[4796]: I1202 20:35:26.914799 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-logs\") pod \"9f952e5c-16ef-4af8-9192-4888cf1ad0dc\" (UID: \"9f952e5c-16ef-4af8-9192-4888cf1ad0dc\") " Dec 02 20:35:26 crc kubenswrapper[4796]: I1202 20:35:26.914834 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-internal-tls-certs\") pod \"9f952e5c-16ef-4af8-9192-4888cf1ad0dc\" (UID: \"9f952e5c-16ef-4af8-9192-4888cf1ad0dc\") " Dec 02 20:35:26 crc kubenswrapper[4796]: I1202 20:35:26.914976 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-public-tls-certs\") pod \"9f952e5c-16ef-4af8-9192-4888cf1ad0dc\" (UID: \"9f952e5c-16ef-4af8-9192-4888cf1ad0dc\") " Dec 02 20:35:26 crc kubenswrapper[4796]: I1202 20:35:26.915048 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nq8nl\" (UniqueName: \"kubernetes.io/projected/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-kube-api-access-nq8nl\") pod \"9f952e5c-16ef-4af8-9192-4888cf1ad0dc\" (UID: \"9f952e5c-16ef-4af8-9192-4888cf1ad0dc\") " Dec 02 20:35:26 crc kubenswrapper[4796]: I1202 20:35:26.915275 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-logs" (OuterVolumeSpecName: "logs") pod "9f952e5c-16ef-4af8-9192-4888cf1ad0dc" (UID: "9f952e5c-16ef-4af8-9192-4888cf1ad0dc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:35:26 crc kubenswrapper[4796]: I1202 20:35:26.915508 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-logs\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:26 crc kubenswrapper[4796]: I1202 20:35:26.925576 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-kube-api-access-nq8nl" (OuterVolumeSpecName: "kube-api-access-nq8nl") pod "9f952e5c-16ef-4af8-9192-4888cf1ad0dc" (UID: "9f952e5c-16ef-4af8-9192-4888cf1ad0dc"). InnerVolumeSpecName "kube-api-access-nq8nl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:35:26 crc kubenswrapper[4796]: I1202 20:35:26.967075 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f952e5c-16ef-4af8-9192-4888cf1ad0dc" (UID: "9f952e5c-16ef-4af8-9192-4888cf1ad0dc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:35:26 crc kubenswrapper[4796]: I1202 20:35:26.971333 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "9f952e5c-16ef-4af8-9192-4888cf1ad0dc" (UID: "9f952e5c-16ef-4af8-9192-4888cf1ad0dc"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:35:26 crc kubenswrapper[4796]: I1202 20:35:26.973244 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-config-data" (OuterVolumeSpecName: "config-data") pod "9f952e5c-16ef-4af8-9192-4888cf1ad0dc" (UID: "9f952e5c-16ef-4af8-9192-4888cf1ad0dc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:35:26 crc kubenswrapper[4796]: I1202 20:35:26.973340 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9f952e5c-16ef-4af8-9192-4888cf1ad0dc" (UID: "9f952e5c-16ef-4af8-9192-4888cf1ad0dc"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:35:26 crc kubenswrapper[4796]: I1202 20:35:26.979878 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9f952e5c-16ef-4af8-9192-4888cf1ad0dc" (UID: "9f952e5c-16ef-4af8-9192-4888cf1ad0dc"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:35:27 crc kubenswrapper[4796]: I1202 20:35:27.017150 4796 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:27 crc kubenswrapper[4796]: I1202 20:35:27.017194 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nq8nl\" (UniqueName: \"kubernetes.io/projected/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-kube-api-access-nq8nl\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:27 crc kubenswrapper[4796]: I1202 20:35:27.017211 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:27 crc kubenswrapper[4796]: I1202 20:35:27.017225 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:27 crc kubenswrapper[4796]: I1202 20:35:27.017236 4796 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:27 crc kubenswrapper[4796]: I1202 20:35:27.017248 4796 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f952e5c-16ef-4af8-9192-4888cf1ad0dc-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:27 crc kubenswrapper[4796]: I1202 20:35:27.137668 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-fhjl7"] Dec 02 20:35:27 crc kubenswrapper[4796]: I1202 20:35:27.144205 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-fhjl7"] Dec 02 20:35:27 crc kubenswrapper[4796]: I1202 20:35:27.159463 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-2f14-account-create-update-svtzs"] Dec 02 20:35:27 crc kubenswrapper[4796]: I1202 20:35:27.161589 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c6a5c8c2-e2e3-4636-bfd4-126009a482f1","Type":"ContainerStarted","Data":"29034216d035b3404d42a9e03bdb8a5e3f6b9730c90e0c1e0f7526c33c8a42b5"} Dec 02 20:35:27 crc kubenswrapper[4796]: I1202 20:35:27.164121 4796 generic.go:334] "Generic (PLEG): container finished" 
podID="ca397d3a-cbff-4faf-b80d-a6eab99fc47a" containerID="f88eeb51e39bfb91504e41120735486f46a481df42fe31b6d03aab7911b57e38" exitCode=0 Dec 02 20:35:27 crc kubenswrapper[4796]: I1202 20:35:27.164285 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"ca397d3a-cbff-4faf-b80d-a6eab99fc47a","Type":"ContainerDied","Data":"f88eeb51e39bfb91504e41120735486f46a481df42fe31b6d03aab7911b57e38"} Dec 02 20:35:27 crc kubenswrapper[4796]: I1202 20:35:27.164353 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"ca397d3a-cbff-4faf-b80d-a6eab99fc47a","Type":"ContainerDied","Data":"2a7dc99429d5e42c3dc1a76551401d85ad095a6baab6ea2f5f73e979442a3c84"} Dec 02 20:35:27 crc kubenswrapper[4796]: I1202 20:35:27.164387 4796 scope.go:117] "RemoveContainer" containerID="f88eeb51e39bfb91504e41120735486f46a481df42fe31b6d03aab7911b57e38" Dec 02 20:35:27 crc kubenswrapper[4796]: I1202 20:35:27.164622 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:35:27 crc kubenswrapper[4796]: I1202 20:35:27.169638 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher2f14-account-delete-4vw8t"] Dec 02 20:35:27 crc kubenswrapper[4796]: I1202 20:35:27.171493 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-2f14-account-create-update-svtzs"] Dec 02 20:35:27 crc kubenswrapper[4796]: I1202 20:35:27.172963 4796 generic.go:334] "Generic (PLEG): container finished" podID="9f952e5c-16ef-4af8-9192-4888cf1ad0dc" containerID="0eba6d85ff22cf8b27f35d261a58bfcd3a696fe12c2c0b4f9179d28ad5db8295" exitCode=0 Dec 02 20:35:27 crc kubenswrapper[4796]: I1202 20:35:27.173006 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"9f952e5c-16ef-4af8-9192-4888cf1ad0dc","Type":"ContainerDied","Data":"0eba6d85ff22cf8b27f35d261a58bfcd3a696fe12c2c0b4f9179d28ad5db8295"} Dec 02 20:35:27 crc kubenswrapper[4796]: I1202 20:35:27.173034 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"9f952e5c-16ef-4af8-9192-4888cf1ad0dc","Type":"ContainerDied","Data":"944cc74de93d04d5eb4358004c2f07d1328ec7f7f461882197a3a6edb5f496f9"} Dec 02 20:35:27 crc kubenswrapper[4796]: I1202 20:35:27.173102 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:27 crc kubenswrapper[4796]: I1202 20:35:27.176705 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher2f14-account-delete-4vw8t"] Dec 02 20:35:27 crc kubenswrapper[4796]: I1202 20:35:27.191976 4796 scope.go:117] "RemoveContainer" containerID="f88eeb51e39bfb91504e41120735486f46a481df42fe31b6d03aab7911b57e38" Dec 02 20:35:27 crc kubenswrapper[4796]: E1202 20:35:27.192439 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f88eeb51e39bfb91504e41120735486f46a481df42fe31b6d03aab7911b57e38\": container with ID starting with f88eeb51e39bfb91504e41120735486f46a481df42fe31b6d03aab7911b57e38 not found: ID does not exist" containerID="f88eeb51e39bfb91504e41120735486f46a481df42fe31b6d03aab7911b57e38" Dec 02 20:35:27 crc kubenswrapper[4796]: I1202 20:35:27.192471 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f88eeb51e39bfb91504e41120735486f46a481df42fe31b6d03aab7911b57e38"} err="failed to get container status \"f88eeb51e39bfb91504e41120735486f46a481df42fe31b6d03aab7911b57e38\": rpc error: code = NotFound desc = could not find container \"f88eeb51e39bfb91504e41120735486f46a481df42fe31b6d03aab7911b57e38\": container with ID starting with f88eeb51e39bfb91504e41120735486f46a481df42fe31b6d03aab7911b57e38 not found: ID does not exist" Dec 02 20:35:27 crc kubenswrapper[4796]: I1202 20:35:27.192494 4796 scope.go:117] "RemoveContainer" containerID="0eba6d85ff22cf8b27f35d261a58bfcd3a696fe12c2c0b4f9179d28ad5db8295" Dec 02 20:35:27 crc kubenswrapper[4796]: I1202 20:35:27.213373 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:35:27 crc kubenswrapper[4796]: I1202 20:35:27.223675 4796 scope.go:117] "RemoveContainer" containerID="7c5c1236a4a527df008706d8cb6411457cb411bdf19de937162f5fba6d356c6d" Dec 02 20:35:27 crc kubenswrapper[4796]: I1202 20:35:27.226186 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:35:27 crc kubenswrapper[4796]: I1202 20:35:27.233448 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:35:27 crc kubenswrapper[4796]: I1202 20:35:27.239666 4796 scope.go:117] "RemoveContainer" containerID="0eba6d85ff22cf8b27f35d261a58bfcd3a696fe12c2c0b4f9179d28ad5db8295" Dec 02 20:35:27 crc kubenswrapper[4796]: E1202 20:35:27.240136 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0eba6d85ff22cf8b27f35d261a58bfcd3a696fe12c2c0b4f9179d28ad5db8295\": container with ID starting with 0eba6d85ff22cf8b27f35d261a58bfcd3a696fe12c2c0b4f9179d28ad5db8295 not found: ID does not exist" containerID="0eba6d85ff22cf8b27f35d261a58bfcd3a696fe12c2c0b4f9179d28ad5db8295" Dec 02 20:35:27 crc kubenswrapper[4796]: I1202 20:35:27.240168 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0eba6d85ff22cf8b27f35d261a58bfcd3a696fe12c2c0b4f9179d28ad5db8295"} err="failed to get container status \"0eba6d85ff22cf8b27f35d261a58bfcd3a696fe12c2c0b4f9179d28ad5db8295\": rpc error: code = NotFound desc = could not find container \"0eba6d85ff22cf8b27f35d261a58bfcd3a696fe12c2c0b4f9179d28ad5db8295\": container with ID starting with 
0eba6d85ff22cf8b27f35d261a58bfcd3a696fe12c2c0b4f9179d28ad5db8295 not found: ID does not exist" Dec 02 20:35:27 crc kubenswrapper[4796]: I1202 20:35:27.240193 4796 scope.go:117] "RemoveContainer" containerID="7c5c1236a4a527df008706d8cb6411457cb411bdf19de937162f5fba6d356c6d" Dec 02 20:35:27 crc kubenswrapper[4796]: E1202 20:35:27.240636 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c5c1236a4a527df008706d8cb6411457cb411bdf19de937162f5fba6d356c6d\": container with ID starting with 7c5c1236a4a527df008706d8cb6411457cb411bdf19de937162f5fba6d356c6d not found: ID does not exist" containerID="7c5c1236a4a527df008706d8cb6411457cb411bdf19de937162f5fba6d356c6d" Dec 02 20:35:27 crc kubenswrapper[4796]: I1202 20:35:27.240685 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c5c1236a4a527df008706d8cb6411457cb411bdf19de937162f5fba6d356c6d"} err="failed to get container status \"7c5c1236a4a527df008706d8cb6411457cb411bdf19de937162f5fba6d356c6d\": rpc error: code = NotFound desc = could not find container \"7c5c1236a4a527df008706d8cb6411457cb411bdf19de937162f5fba6d356c6d\": container with ID starting with 7c5c1236a4a527df008706d8cb6411457cb411bdf19de937162f5fba6d356c6d not found: ID does not exist" Dec 02 20:35:27 crc kubenswrapper[4796]: I1202 20:35:27.243039 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:35:27 crc kubenswrapper[4796]: I1202 20:35:27.279757 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5365ca9d-04f4-41cf-b834-8d1f753d1ab4" path="/var/lib/kubelet/pods/5365ca9d-04f4-41cf-b834-8d1f753d1ab4/volumes" Dec 02 20:35:27 crc kubenswrapper[4796]: I1202 20:35:27.280535 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77bb45a7-f635-4ce6-9404-10942e3301ce" path="/var/lib/kubelet/pods/77bb45a7-f635-4ce6-9404-10942e3301ce/volumes" Dec 02 20:35:27 crc kubenswrapper[4796]: I1202 20:35:27.281123 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f952e5c-16ef-4af8-9192-4888cf1ad0dc" path="/var/lib/kubelet/pods/9f952e5c-16ef-4af8-9192-4888cf1ad0dc/volumes" Dec 02 20:35:27 crc kubenswrapper[4796]: I1202 20:35:27.282237 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8b5866d-0d4b-4954-8ebb-3e1e339bf32d" path="/var/lib/kubelet/pods/a8b5866d-0d4b-4954-8ebb-3e1e339bf32d/volumes" Dec 02 20:35:27 crc kubenswrapper[4796]: I1202 20:35:27.282904 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca397d3a-cbff-4faf-b80d-a6eab99fc47a" path="/var/lib/kubelet/pods/ca397d3a-cbff-4faf-b80d-a6eab99fc47a/volumes" Dec 02 20:35:29 crc kubenswrapper[4796]: I1202 20:35:29.197346 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c6a5c8c2-e2e3-4636-bfd4-126009a482f1","Type":"ContainerStarted","Data":"26e4a0ac322b81f76860bfa7110f8faf7ee5bed72ec50b08786bb22c6926ec14"} Dec 02 20:35:29 crc kubenswrapper[4796]: I1202 20:35:29.197713 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:29 crc kubenswrapper[4796]: I1202 20:35:29.197512 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="c6a5c8c2-e2e3-4636-bfd4-126009a482f1" containerName="ceilometer-notification-agent" 
containerID="cri-o://cebba8744ddf888fad24285f534ff72c3ca3c0a6e13ff2aaa44b8da10023ced0" gracePeriod=30 Dec 02 20:35:29 crc kubenswrapper[4796]: I1202 20:35:29.197461 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="c6a5c8c2-e2e3-4636-bfd4-126009a482f1" containerName="ceilometer-central-agent" containerID="cri-o://384b3e1de5e744f9940333e3c8b95398a2b0e9159e885cacb0fb2db6534879a7" gracePeriod=30 Dec 02 20:35:29 crc kubenswrapper[4796]: I1202 20:35:29.197517 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="c6a5c8c2-e2e3-4636-bfd4-126009a482f1" containerName="sg-core" containerID="cri-o://29034216d035b3404d42a9e03bdb8a5e3f6b9730c90e0c1e0f7526c33c8a42b5" gracePeriod=30 Dec 02 20:35:29 crc kubenswrapper[4796]: I1202 20:35:29.197571 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="c6a5c8c2-e2e3-4636-bfd4-126009a482f1" containerName="proxy-httpd" containerID="cri-o://26e4a0ac322b81f76860bfa7110f8faf7ee5bed72ec50b08786bb22c6926ec14" gracePeriod=30 Dec 02 20:35:29 crc kubenswrapper[4796]: I1202 20:35:29.234762 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.245454652 podStartE2EDuration="6.234734363s" podCreationTimestamp="2025-12-02 20:35:23 +0000 UTC" firstStartedPulling="2025-12-02 20:35:24.011753315 +0000 UTC m=+1407.015128849" lastFinishedPulling="2025-12-02 20:35:28.001033006 +0000 UTC m=+1411.004408560" observedRunningTime="2025-12-02 20:35:29.227164599 +0000 UTC m=+1412.230540133" watchObservedRunningTime="2025-12-02 20:35:29.234734363 +0000 UTC m=+1412.238109897" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.073864 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.170638 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-log-httpd\") pod \"c6a5c8c2-e2e3-4636-bfd4-126009a482f1\" (UID: \"c6a5c8c2-e2e3-4636-bfd4-126009a482f1\") " Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.170700 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzn55\" (UniqueName: \"kubernetes.io/projected/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-kube-api-access-rzn55\") pod \"c6a5c8c2-e2e3-4636-bfd4-126009a482f1\" (UID: \"c6a5c8c2-e2e3-4636-bfd4-126009a482f1\") " Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.170777 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-config-data\") pod \"c6a5c8c2-e2e3-4636-bfd4-126009a482f1\" (UID: \"c6a5c8c2-e2e3-4636-bfd4-126009a482f1\") " Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.170810 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-scripts\") pod \"c6a5c8c2-e2e3-4636-bfd4-126009a482f1\" (UID: \"c6a5c8c2-e2e3-4636-bfd4-126009a482f1\") " Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.170925 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-combined-ca-bundle\") pod \"c6a5c8c2-e2e3-4636-bfd4-126009a482f1\" (UID: \"c6a5c8c2-e2e3-4636-bfd4-126009a482f1\") " Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.170980 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-run-httpd\") pod \"c6a5c8c2-e2e3-4636-bfd4-126009a482f1\" (UID: \"c6a5c8c2-e2e3-4636-bfd4-126009a482f1\") " Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.171007 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-sg-core-conf-yaml\") pod \"c6a5c8c2-e2e3-4636-bfd4-126009a482f1\" (UID: \"c6a5c8c2-e2e3-4636-bfd4-126009a482f1\") " Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.171793 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-ceilometer-tls-certs\") pod \"c6a5c8c2-e2e3-4636-bfd4-126009a482f1\" (UID: \"c6a5c8c2-e2e3-4636-bfd4-126009a482f1\") " Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.171270 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c6a5c8c2-e2e3-4636-bfd4-126009a482f1" (UID: "c6a5c8c2-e2e3-4636-bfd4-126009a482f1"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.171395 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c6a5c8c2-e2e3-4636-bfd4-126009a482f1" (UID: "c6a5c8c2-e2e3-4636-bfd4-126009a482f1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.172321 4796 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.172343 4796 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.177586 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-scripts" (OuterVolumeSpecName: "scripts") pod "c6a5c8c2-e2e3-4636-bfd4-126009a482f1" (UID: "c6a5c8c2-e2e3-4636-bfd4-126009a482f1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.178676 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-kube-api-access-rzn55" (OuterVolumeSpecName: "kube-api-access-rzn55") pod "c6a5c8c2-e2e3-4636-bfd4-126009a482f1" (UID: "c6a5c8c2-e2e3-4636-bfd4-126009a482f1"). InnerVolumeSpecName "kube-api-access-rzn55". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.203268 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c6a5c8c2-e2e3-4636-bfd4-126009a482f1" (UID: "c6a5c8c2-e2e3-4636-bfd4-126009a482f1"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.209436 4796 generic.go:334] "Generic (PLEG): container finished" podID="c6a5c8c2-e2e3-4636-bfd4-126009a482f1" containerID="26e4a0ac322b81f76860bfa7110f8faf7ee5bed72ec50b08786bb22c6926ec14" exitCode=0 Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.209477 4796 generic.go:334] "Generic (PLEG): container finished" podID="c6a5c8c2-e2e3-4636-bfd4-126009a482f1" containerID="29034216d035b3404d42a9e03bdb8a5e3f6b9730c90e0c1e0f7526c33c8a42b5" exitCode=2 Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.209488 4796 generic.go:334] "Generic (PLEG): container finished" podID="c6a5c8c2-e2e3-4636-bfd4-126009a482f1" containerID="cebba8744ddf888fad24285f534ff72c3ca3c0a6e13ff2aaa44b8da10023ced0" exitCode=0 Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.209497 4796 generic.go:334] "Generic (PLEG): container finished" podID="c6a5c8c2-e2e3-4636-bfd4-126009a482f1" containerID="384b3e1de5e744f9940333e3c8b95398a2b0e9159e885cacb0fb2db6534879a7" exitCode=0 Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.209523 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c6a5c8c2-e2e3-4636-bfd4-126009a482f1","Type":"ContainerDied","Data":"26e4a0ac322b81f76860bfa7110f8faf7ee5bed72ec50b08786bb22c6926ec14"} Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.209558 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c6a5c8c2-e2e3-4636-bfd4-126009a482f1","Type":"ContainerDied","Data":"29034216d035b3404d42a9e03bdb8a5e3f6b9730c90e0c1e0f7526c33c8a42b5"} Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.209571 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c6a5c8c2-e2e3-4636-bfd4-126009a482f1","Type":"ContainerDied","Data":"cebba8744ddf888fad24285f534ff72c3ca3c0a6e13ff2aaa44b8da10023ced0"} Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.209583 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c6a5c8c2-e2e3-4636-bfd4-126009a482f1","Type":"ContainerDied","Data":"384b3e1de5e744f9940333e3c8b95398a2b0e9159e885cacb0fb2db6534879a7"} Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.209597 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c6a5c8c2-e2e3-4636-bfd4-126009a482f1","Type":"ContainerDied","Data":"0fd8bd8ac18511c9876e6a487bcf1f6f226b462a2217cf1454e2b58761201b7b"} Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.209620 4796 scope.go:117] "RemoveContainer" containerID="26e4a0ac322b81f76860bfa7110f8faf7ee5bed72ec50b08786bb22c6926ec14" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.209791 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.223675 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c6a5c8c2-e2e3-4636-bfd4-126009a482f1" (UID: "c6a5c8c2-e2e3-4636-bfd4-126009a482f1"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.239131 4796 scope.go:117] "RemoveContainer" containerID="29034216d035b3404d42a9e03bdb8a5e3f6b9730c90e0c1e0f7526c33c8a42b5" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.259191 4796 scope.go:117] "RemoveContainer" containerID="cebba8744ddf888fad24285f534ff72c3ca3c0a6e13ff2aaa44b8da10023ced0" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.263458 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6a5c8c2-e2e3-4636-bfd4-126009a482f1" (UID: "c6a5c8c2-e2e3-4636-bfd4-126009a482f1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.274667 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.274714 4796 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.274727 4796 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.274741 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzn55\" (UniqueName: \"kubernetes.io/projected/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-kube-api-access-rzn55\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.274755 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.279447 4796 scope.go:117] "RemoveContainer" containerID="384b3e1de5e744f9940333e3c8b95398a2b0e9159e885cacb0fb2db6534879a7" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.282303 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-config-data" (OuterVolumeSpecName: "config-data") pod "c6a5c8c2-e2e3-4636-bfd4-126009a482f1" (UID: "c6a5c8c2-e2e3-4636-bfd4-126009a482f1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.301626 4796 scope.go:117] "RemoveContainer" containerID="26e4a0ac322b81f76860bfa7110f8faf7ee5bed72ec50b08786bb22c6926ec14" Dec 02 20:35:30 crc kubenswrapper[4796]: E1202 20:35:30.303131 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26e4a0ac322b81f76860bfa7110f8faf7ee5bed72ec50b08786bb22c6926ec14\": container with ID starting with 26e4a0ac322b81f76860bfa7110f8faf7ee5bed72ec50b08786bb22c6926ec14 not found: ID does not exist" containerID="26e4a0ac322b81f76860bfa7110f8faf7ee5bed72ec50b08786bb22c6926ec14" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.303280 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26e4a0ac322b81f76860bfa7110f8faf7ee5bed72ec50b08786bb22c6926ec14"} err="failed to get container status \"26e4a0ac322b81f76860bfa7110f8faf7ee5bed72ec50b08786bb22c6926ec14\": rpc error: code = NotFound desc = could not find container \"26e4a0ac322b81f76860bfa7110f8faf7ee5bed72ec50b08786bb22c6926ec14\": container with ID starting with 26e4a0ac322b81f76860bfa7110f8faf7ee5bed72ec50b08786bb22c6926ec14 not found: ID does not exist" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.303382 4796 scope.go:117] "RemoveContainer" containerID="29034216d035b3404d42a9e03bdb8a5e3f6b9730c90e0c1e0f7526c33c8a42b5" Dec 02 20:35:30 crc kubenswrapper[4796]: E1202 20:35:30.303882 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29034216d035b3404d42a9e03bdb8a5e3f6b9730c90e0c1e0f7526c33c8a42b5\": container with ID starting with 29034216d035b3404d42a9e03bdb8a5e3f6b9730c90e0c1e0f7526c33c8a42b5 not found: ID does not exist" containerID="29034216d035b3404d42a9e03bdb8a5e3f6b9730c90e0c1e0f7526c33c8a42b5" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.303918 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29034216d035b3404d42a9e03bdb8a5e3f6b9730c90e0c1e0f7526c33c8a42b5"} err="failed to get container status \"29034216d035b3404d42a9e03bdb8a5e3f6b9730c90e0c1e0f7526c33c8a42b5\": rpc error: code = NotFound desc = could not find container \"29034216d035b3404d42a9e03bdb8a5e3f6b9730c90e0c1e0f7526c33c8a42b5\": container with ID starting with 29034216d035b3404d42a9e03bdb8a5e3f6b9730c90e0c1e0f7526c33c8a42b5 not found: ID does not exist" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.303939 4796 scope.go:117] "RemoveContainer" containerID="cebba8744ddf888fad24285f534ff72c3ca3c0a6e13ff2aaa44b8da10023ced0" Dec 02 20:35:30 crc kubenswrapper[4796]: E1202 20:35:30.304210 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cebba8744ddf888fad24285f534ff72c3ca3c0a6e13ff2aaa44b8da10023ced0\": container with ID starting with cebba8744ddf888fad24285f534ff72c3ca3c0a6e13ff2aaa44b8da10023ced0 not found: ID does not exist" containerID="cebba8744ddf888fad24285f534ff72c3ca3c0a6e13ff2aaa44b8da10023ced0" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.304234 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cebba8744ddf888fad24285f534ff72c3ca3c0a6e13ff2aaa44b8da10023ced0"} err="failed to get container status \"cebba8744ddf888fad24285f534ff72c3ca3c0a6e13ff2aaa44b8da10023ced0\": rpc error: code = NotFound desc = could not 
find container \"cebba8744ddf888fad24285f534ff72c3ca3c0a6e13ff2aaa44b8da10023ced0\": container with ID starting with cebba8744ddf888fad24285f534ff72c3ca3c0a6e13ff2aaa44b8da10023ced0 not found: ID does not exist" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.304271 4796 scope.go:117] "RemoveContainer" containerID="384b3e1de5e744f9940333e3c8b95398a2b0e9159e885cacb0fb2db6534879a7" Dec 02 20:35:30 crc kubenswrapper[4796]: E1202 20:35:30.304522 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"384b3e1de5e744f9940333e3c8b95398a2b0e9159e885cacb0fb2db6534879a7\": container with ID starting with 384b3e1de5e744f9940333e3c8b95398a2b0e9159e885cacb0fb2db6534879a7 not found: ID does not exist" containerID="384b3e1de5e744f9940333e3c8b95398a2b0e9159e885cacb0fb2db6534879a7" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.304543 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"384b3e1de5e744f9940333e3c8b95398a2b0e9159e885cacb0fb2db6534879a7"} err="failed to get container status \"384b3e1de5e744f9940333e3c8b95398a2b0e9159e885cacb0fb2db6534879a7\": rpc error: code = NotFound desc = could not find container \"384b3e1de5e744f9940333e3c8b95398a2b0e9159e885cacb0fb2db6534879a7\": container with ID starting with 384b3e1de5e744f9940333e3c8b95398a2b0e9159e885cacb0fb2db6534879a7 not found: ID does not exist" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.304560 4796 scope.go:117] "RemoveContainer" containerID="26e4a0ac322b81f76860bfa7110f8faf7ee5bed72ec50b08786bb22c6926ec14" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.304780 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26e4a0ac322b81f76860bfa7110f8faf7ee5bed72ec50b08786bb22c6926ec14"} err="failed to get container status \"26e4a0ac322b81f76860bfa7110f8faf7ee5bed72ec50b08786bb22c6926ec14\": rpc error: code = NotFound desc = could not find container \"26e4a0ac322b81f76860bfa7110f8faf7ee5bed72ec50b08786bb22c6926ec14\": container with ID starting with 26e4a0ac322b81f76860bfa7110f8faf7ee5bed72ec50b08786bb22c6926ec14 not found: ID does not exist" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.304798 4796 scope.go:117] "RemoveContainer" containerID="29034216d035b3404d42a9e03bdb8a5e3f6b9730c90e0c1e0f7526c33c8a42b5" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.304960 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29034216d035b3404d42a9e03bdb8a5e3f6b9730c90e0c1e0f7526c33c8a42b5"} err="failed to get container status \"29034216d035b3404d42a9e03bdb8a5e3f6b9730c90e0c1e0f7526c33c8a42b5\": rpc error: code = NotFound desc = could not find container \"29034216d035b3404d42a9e03bdb8a5e3f6b9730c90e0c1e0f7526c33c8a42b5\": container with ID starting with 29034216d035b3404d42a9e03bdb8a5e3f6b9730c90e0c1e0f7526c33c8a42b5 not found: ID does not exist" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.304982 4796 scope.go:117] "RemoveContainer" containerID="cebba8744ddf888fad24285f534ff72c3ca3c0a6e13ff2aaa44b8da10023ced0" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.305149 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cebba8744ddf888fad24285f534ff72c3ca3c0a6e13ff2aaa44b8da10023ced0"} err="failed to get container status \"cebba8744ddf888fad24285f534ff72c3ca3c0a6e13ff2aaa44b8da10023ced0\": rpc error: code = NotFound desc = could not 
find container \"cebba8744ddf888fad24285f534ff72c3ca3c0a6e13ff2aaa44b8da10023ced0\": container with ID starting with cebba8744ddf888fad24285f534ff72c3ca3c0a6e13ff2aaa44b8da10023ced0 not found: ID does not exist" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.305168 4796 scope.go:117] "RemoveContainer" containerID="384b3e1de5e744f9940333e3c8b95398a2b0e9159e885cacb0fb2db6534879a7" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.305451 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"384b3e1de5e744f9940333e3c8b95398a2b0e9159e885cacb0fb2db6534879a7"} err="failed to get container status \"384b3e1de5e744f9940333e3c8b95398a2b0e9159e885cacb0fb2db6534879a7\": rpc error: code = NotFound desc = could not find container \"384b3e1de5e744f9940333e3c8b95398a2b0e9159e885cacb0fb2db6534879a7\": container with ID starting with 384b3e1de5e744f9940333e3c8b95398a2b0e9159e885cacb0fb2db6534879a7 not found: ID does not exist" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.305470 4796 scope.go:117] "RemoveContainer" containerID="26e4a0ac322b81f76860bfa7110f8faf7ee5bed72ec50b08786bb22c6926ec14" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.305633 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26e4a0ac322b81f76860bfa7110f8faf7ee5bed72ec50b08786bb22c6926ec14"} err="failed to get container status \"26e4a0ac322b81f76860bfa7110f8faf7ee5bed72ec50b08786bb22c6926ec14\": rpc error: code = NotFound desc = could not find container \"26e4a0ac322b81f76860bfa7110f8faf7ee5bed72ec50b08786bb22c6926ec14\": container with ID starting with 26e4a0ac322b81f76860bfa7110f8faf7ee5bed72ec50b08786bb22c6926ec14 not found: ID does not exist" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.305650 4796 scope.go:117] "RemoveContainer" containerID="29034216d035b3404d42a9e03bdb8a5e3f6b9730c90e0c1e0f7526c33c8a42b5" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.305796 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29034216d035b3404d42a9e03bdb8a5e3f6b9730c90e0c1e0f7526c33c8a42b5"} err="failed to get container status \"29034216d035b3404d42a9e03bdb8a5e3f6b9730c90e0c1e0f7526c33c8a42b5\": rpc error: code = NotFound desc = could not find container \"29034216d035b3404d42a9e03bdb8a5e3f6b9730c90e0c1e0f7526c33c8a42b5\": container with ID starting with 29034216d035b3404d42a9e03bdb8a5e3f6b9730c90e0c1e0f7526c33c8a42b5 not found: ID does not exist" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.305811 4796 scope.go:117] "RemoveContainer" containerID="cebba8744ddf888fad24285f534ff72c3ca3c0a6e13ff2aaa44b8da10023ced0" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.305956 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cebba8744ddf888fad24285f534ff72c3ca3c0a6e13ff2aaa44b8da10023ced0"} err="failed to get container status \"cebba8744ddf888fad24285f534ff72c3ca3c0a6e13ff2aaa44b8da10023ced0\": rpc error: code = NotFound desc = could not find container \"cebba8744ddf888fad24285f534ff72c3ca3c0a6e13ff2aaa44b8da10023ced0\": container with ID starting with cebba8744ddf888fad24285f534ff72c3ca3c0a6e13ff2aaa44b8da10023ced0 not found: ID does not exist" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.305973 4796 scope.go:117] "RemoveContainer" containerID="384b3e1de5e744f9940333e3c8b95398a2b0e9159e885cacb0fb2db6534879a7" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.306117 4796 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"384b3e1de5e744f9940333e3c8b95398a2b0e9159e885cacb0fb2db6534879a7"} err="failed to get container status \"384b3e1de5e744f9940333e3c8b95398a2b0e9159e885cacb0fb2db6534879a7\": rpc error: code = NotFound desc = could not find container \"384b3e1de5e744f9940333e3c8b95398a2b0e9159e885cacb0fb2db6534879a7\": container with ID starting with 384b3e1de5e744f9940333e3c8b95398a2b0e9159e885cacb0fb2db6534879a7 not found: ID does not exist" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.306133 4796 scope.go:117] "RemoveContainer" containerID="26e4a0ac322b81f76860bfa7110f8faf7ee5bed72ec50b08786bb22c6926ec14" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.306299 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26e4a0ac322b81f76860bfa7110f8faf7ee5bed72ec50b08786bb22c6926ec14"} err="failed to get container status \"26e4a0ac322b81f76860bfa7110f8faf7ee5bed72ec50b08786bb22c6926ec14\": rpc error: code = NotFound desc = could not find container \"26e4a0ac322b81f76860bfa7110f8faf7ee5bed72ec50b08786bb22c6926ec14\": container with ID starting with 26e4a0ac322b81f76860bfa7110f8faf7ee5bed72ec50b08786bb22c6926ec14 not found: ID does not exist" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.306319 4796 scope.go:117] "RemoveContainer" containerID="29034216d035b3404d42a9e03bdb8a5e3f6b9730c90e0c1e0f7526c33c8a42b5" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.306604 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29034216d035b3404d42a9e03bdb8a5e3f6b9730c90e0c1e0f7526c33c8a42b5"} err="failed to get container status \"29034216d035b3404d42a9e03bdb8a5e3f6b9730c90e0c1e0f7526c33c8a42b5\": rpc error: code = NotFound desc = could not find container \"29034216d035b3404d42a9e03bdb8a5e3f6b9730c90e0c1e0f7526c33c8a42b5\": container with ID starting with 29034216d035b3404d42a9e03bdb8a5e3f6b9730c90e0c1e0f7526c33c8a42b5 not found: ID does not exist" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.306623 4796 scope.go:117] "RemoveContainer" containerID="cebba8744ddf888fad24285f534ff72c3ca3c0a6e13ff2aaa44b8da10023ced0" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.307003 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cebba8744ddf888fad24285f534ff72c3ca3c0a6e13ff2aaa44b8da10023ced0"} err="failed to get container status \"cebba8744ddf888fad24285f534ff72c3ca3c0a6e13ff2aaa44b8da10023ced0\": rpc error: code = NotFound desc = could not find container \"cebba8744ddf888fad24285f534ff72c3ca3c0a6e13ff2aaa44b8da10023ced0\": container with ID starting with cebba8744ddf888fad24285f534ff72c3ca3c0a6e13ff2aaa44b8da10023ced0 not found: ID does not exist" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.307023 4796 scope.go:117] "RemoveContainer" containerID="384b3e1de5e744f9940333e3c8b95398a2b0e9159e885cacb0fb2db6534879a7" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.307422 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"384b3e1de5e744f9940333e3c8b95398a2b0e9159e885cacb0fb2db6534879a7"} err="failed to get container status \"384b3e1de5e744f9940333e3c8b95398a2b0e9159e885cacb0fb2db6534879a7\": rpc error: code = NotFound desc = could not find container \"384b3e1de5e744f9940333e3c8b95398a2b0e9159e885cacb0fb2db6534879a7\": container with ID starting with 
384b3e1de5e744f9940333e3c8b95398a2b0e9159e885cacb0fb2db6534879a7 not found: ID does not exist" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.376343 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6a5c8c2-e2e3-4636-bfd4-126009a482f1-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.554464 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.574730 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.584593 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:35:30 crc kubenswrapper[4796]: E1202 20:35:30.585311 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f952e5c-16ef-4af8-9192-4888cf1ad0dc" containerName="watcher-api" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.585410 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f952e5c-16ef-4af8-9192-4888cf1ad0dc" containerName="watcher-api" Dec 02 20:35:30 crc kubenswrapper[4796]: E1202 20:35:30.585507 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a5c8c2-e2e3-4636-bfd4-126009a482f1" containerName="ceilometer-central-agent" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.585577 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a5c8c2-e2e3-4636-bfd4-126009a482f1" containerName="ceilometer-central-agent" Dec 02 20:35:30 crc kubenswrapper[4796]: E1202 20:35:30.585694 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a5c8c2-e2e3-4636-bfd4-126009a482f1" containerName="ceilometer-notification-agent" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.585765 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a5c8c2-e2e3-4636-bfd4-126009a482f1" containerName="ceilometer-notification-agent" Dec 02 20:35:30 crc kubenswrapper[4796]: E1202 20:35:30.585841 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a5c8c2-e2e3-4636-bfd4-126009a482f1" containerName="sg-core" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.585906 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a5c8c2-e2e3-4636-bfd4-126009a482f1" containerName="sg-core" Dec 02 20:35:30 crc kubenswrapper[4796]: E1202 20:35:30.585986 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca397d3a-cbff-4faf-b80d-a6eab99fc47a" containerName="watcher-applier" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.586053 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca397d3a-cbff-4faf-b80d-a6eab99fc47a" containerName="watcher-applier" Dec 02 20:35:30 crc kubenswrapper[4796]: E1202 20:35:30.586126 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a5c8c2-e2e3-4636-bfd4-126009a482f1" containerName="proxy-httpd" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.586194 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a5c8c2-e2e3-4636-bfd4-126009a482f1" containerName="proxy-httpd" Dec 02 20:35:30 crc kubenswrapper[4796]: E1202 20:35:30.586287 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f952e5c-16ef-4af8-9192-4888cf1ad0dc" containerName="watcher-kuttl-api-log" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.586360 4796 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9f952e5c-16ef-4af8-9192-4888cf1ad0dc" containerName="watcher-kuttl-api-log" Dec 02 20:35:30 crc kubenswrapper[4796]: E1202 20:35:30.586433 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77bb45a7-f635-4ce6-9404-10942e3301ce" containerName="mariadb-account-delete" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.586514 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="77bb45a7-f635-4ce6-9404-10942e3301ce" containerName="mariadb-account-delete" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.586765 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6a5c8c2-e2e3-4636-bfd4-126009a482f1" containerName="ceilometer-central-agent" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.586856 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6a5c8c2-e2e3-4636-bfd4-126009a482f1" containerName="proxy-httpd" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.586929 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="77bb45a7-f635-4ce6-9404-10942e3301ce" containerName="mariadb-account-delete" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.587009 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6a5c8c2-e2e3-4636-bfd4-126009a482f1" containerName="ceilometer-notification-agent" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.587192 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6a5c8c2-e2e3-4636-bfd4-126009a482f1" containerName="sg-core" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.587307 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f952e5c-16ef-4af8-9192-4888cf1ad0dc" containerName="watcher-api" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.587396 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f952e5c-16ef-4af8-9192-4888cf1ad0dc" containerName="watcher-kuttl-api-log" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.587482 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca397d3a-cbff-4faf-b80d-a6eab99fc47a" containerName="watcher-applier" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.589432 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.591928 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.592140 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.592318 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.596471 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.680963 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/214f3adf-67a1-48bf-a163-3e9f45139e88-run-httpd\") pod \"ceilometer-0\" (UID: \"214f3adf-67a1-48bf-a163-3e9f45139e88\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.681077 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/214f3adf-67a1-48bf-a163-3e9f45139e88-scripts\") pod \"ceilometer-0\" (UID: \"214f3adf-67a1-48bf-a163-3e9f45139e88\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.681325 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/214f3adf-67a1-48bf-a163-3e9f45139e88-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"214f3adf-67a1-48bf-a163-3e9f45139e88\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.681444 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/214f3adf-67a1-48bf-a163-3e9f45139e88-log-httpd\") pod \"ceilometer-0\" (UID: \"214f3adf-67a1-48bf-a163-3e9f45139e88\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.681542 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvsv5\" (UniqueName: \"kubernetes.io/projected/214f3adf-67a1-48bf-a163-3e9f45139e88-kube-api-access-rvsv5\") pod \"ceilometer-0\" (UID: \"214f3adf-67a1-48bf-a163-3e9f45139e88\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.681603 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/214f3adf-67a1-48bf-a163-3e9f45139e88-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"214f3adf-67a1-48bf-a163-3e9f45139e88\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.681717 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/214f3adf-67a1-48bf-a163-3e9f45139e88-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"214f3adf-67a1-48bf-a163-3e9f45139e88\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.681777 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/214f3adf-67a1-48bf-a163-3e9f45139e88-config-data\") pod \"ceilometer-0\" (UID: \"214f3adf-67a1-48bf-a163-3e9f45139e88\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.783115 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/214f3adf-67a1-48bf-a163-3e9f45139e88-scripts\") pod \"ceilometer-0\" (UID: \"214f3adf-67a1-48bf-a163-3e9f45139e88\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.783481 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/214f3adf-67a1-48bf-a163-3e9f45139e88-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"214f3adf-67a1-48bf-a163-3e9f45139e88\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.783523 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/214f3adf-67a1-48bf-a163-3e9f45139e88-log-httpd\") pod \"ceilometer-0\" (UID: \"214f3adf-67a1-48bf-a163-3e9f45139e88\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.783557 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvsv5\" (UniqueName: \"kubernetes.io/projected/214f3adf-67a1-48bf-a163-3e9f45139e88-kube-api-access-rvsv5\") pod \"ceilometer-0\" (UID: \"214f3adf-67a1-48bf-a163-3e9f45139e88\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.783583 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/214f3adf-67a1-48bf-a163-3e9f45139e88-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"214f3adf-67a1-48bf-a163-3e9f45139e88\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.783620 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/214f3adf-67a1-48bf-a163-3e9f45139e88-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"214f3adf-67a1-48bf-a163-3e9f45139e88\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.783647 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/214f3adf-67a1-48bf-a163-3e9f45139e88-config-data\") pod \"ceilometer-0\" (UID: \"214f3adf-67a1-48bf-a163-3e9f45139e88\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.783678 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/214f3adf-67a1-48bf-a163-3e9f45139e88-run-httpd\") pod \"ceilometer-0\" (UID: \"214f3adf-67a1-48bf-a163-3e9f45139e88\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.784158 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/214f3adf-67a1-48bf-a163-3e9f45139e88-run-httpd\") pod \"ceilometer-0\" (UID: 
\"214f3adf-67a1-48bf-a163-3e9f45139e88\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.785069 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/214f3adf-67a1-48bf-a163-3e9f45139e88-log-httpd\") pod \"ceilometer-0\" (UID: \"214f3adf-67a1-48bf-a163-3e9f45139e88\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.790392 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/214f3adf-67a1-48bf-a163-3e9f45139e88-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"214f3adf-67a1-48bf-a163-3e9f45139e88\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.796901 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/214f3adf-67a1-48bf-a163-3e9f45139e88-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"214f3adf-67a1-48bf-a163-3e9f45139e88\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.797802 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/214f3adf-67a1-48bf-a163-3e9f45139e88-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"214f3adf-67a1-48bf-a163-3e9f45139e88\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.799582 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/214f3adf-67a1-48bf-a163-3e9f45139e88-scripts\") pod \"ceilometer-0\" (UID: \"214f3adf-67a1-48bf-a163-3e9f45139e88\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.800324 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/214f3adf-67a1-48bf-a163-3e9f45139e88-config-data\") pod \"ceilometer-0\" (UID: \"214f3adf-67a1-48bf-a163-3e9f45139e88\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.804862 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvsv5\" (UniqueName: \"kubernetes.io/projected/214f3adf-67a1-48bf-a163-3e9f45139e88-kube-api-access-rvsv5\") pod \"ceilometer-0\" (UID: \"214f3adf-67a1-48bf-a163-3e9f45139e88\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:30 crc kubenswrapper[4796]: I1202 20:35:30.924161 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:31 crc kubenswrapper[4796]: I1202 20:35:31.275449 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6a5c8c2-e2e3-4636-bfd4-126009a482f1" path="/var/lib/kubelet/pods/c6a5c8c2-e2e3-4636-bfd4-126009a482f1/volumes" Dec 02 20:35:31 crc kubenswrapper[4796]: I1202 20:35:31.400880 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:35:32 crc kubenswrapper[4796]: I1202 20:35:32.233059 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"214f3adf-67a1-48bf-a163-3e9f45139e88","Type":"ContainerStarted","Data":"64b198ceb27022e8bd0fa760bbb6ee77b6df88281e7c74a8adedb502f30504d3"} Dec 02 20:35:32 crc kubenswrapper[4796]: I1202 20:35:32.233667 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"214f3adf-67a1-48bf-a163-3e9f45139e88","Type":"ContainerStarted","Data":"25f4314e5a94bb9d395928a9305073e0deec533e70e0980655cb70d74474b05e"} Dec 02 20:35:32 crc kubenswrapper[4796]: I1202 20:35:32.864428 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:32 crc kubenswrapper[4796]: I1202 20:35:32.924333 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80-logs\") pod \"1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80\" (UID: \"1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80\") " Dec 02 20:35:32 crc kubenswrapper[4796]: I1202 20:35:32.925136 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80-logs" (OuterVolumeSpecName: "logs") pod "1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80" (UID: "1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:35:32 crc kubenswrapper[4796]: I1202 20:35:32.925301 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80-config-data\") pod \"1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80\" (UID: \"1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80\") " Dec 02 20:35:32 crc kubenswrapper[4796]: I1202 20:35:32.925849 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsmd9\" (UniqueName: \"kubernetes.io/projected/1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80-kube-api-access-fsmd9\") pod \"1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80\" (UID: \"1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80\") " Dec 02 20:35:32 crc kubenswrapper[4796]: I1202 20:35:32.925987 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80-custom-prometheus-ca\") pod \"1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80\" (UID: \"1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80\") " Dec 02 20:35:32 crc kubenswrapper[4796]: I1202 20:35:32.926499 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80-combined-ca-bundle\") pod \"1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80\" (UID: \"1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80\") " Dec 02 20:35:32 crc kubenswrapper[4796]: I1202 20:35:32.927150 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80-logs\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:32 crc kubenswrapper[4796]: I1202 20:35:32.934096 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80-kube-api-access-fsmd9" (OuterVolumeSpecName: "kube-api-access-fsmd9") pod "1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80" (UID: "1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80"). InnerVolumeSpecName "kube-api-access-fsmd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:35:32 crc kubenswrapper[4796]: I1202 20:35:32.969514 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80" (UID: "1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:35:32 crc kubenswrapper[4796]: I1202 20:35:32.984590 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80" (UID: "1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:35:32 crc kubenswrapper[4796]: I1202 20:35:32.994425 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80-config-data" (OuterVolumeSpecName: "config-data") pod "1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80" (UID: "1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:35:33 crc kubenswrapper[4796]: I1202 20:35:33.029223 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:33 crc kubenswrapper[4796]: I1202 20:35:33.029276 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsmd9\" (UniqueName: \"kubernetes.io/projected/1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80-kube-api-access-fsmd9\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:33 crc kubenswrapper[4796]: I1202 20:35:33.029293 4796 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:33 crc kubenswrapper[4796]: I1202 20:35:33.029305 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:33 crc kubenswrapper[4796]: I1202 20:35:33.243032 4796 generic.go:334] "Generic (PLEG): container finished" podID="1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80" containerID="db671afd412dea6ad0e3f2b7108707a4ec4fc39368de0d12bff4333a3a262472" exitCode=0 Dec 02 20:35:33 crc kubenswrapper[4796]: I1202 20:35:33.243100 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:33 crc kubenswrapper[4796]: I1202 20:35:33.243102 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80","Type":"ContainerDied","Data":"db671afd412dea6ad0e3f2b7108707a4ec4fc39368de0d12bff4333a3a262472"} Dec 02 20:35:33 crc kubenswrapper[4796]: I1202 20:35:33.243137 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80","Type":"ContainerDied","Data":"99f22147e861db4d05261d663cda26d417bde798684203584ed7362c4dd2c5ff"} Dec 02 20:35:33 crc kubenswrapper[4796]: I1202 20:35:33.243153 4796 scope.go:117] "RemoveContainer" containerID="db671afd412dea6ad0e3f2b7108707a4ec4fc39368de0d12bff4333a3a262472" Dec 02 20:35:33 crc kubenswrapper[4796]: I1202 20:35:33.245775 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"214f3adf-67a1-48bf-a163-3e9f45139e88","Type":"ContainerStarted","Data":"1ba0919dcbf05ee12fc4e180e7b92270b5c064551da2fce2bdcbc737fd35f241"} Dec 02 20:35:33 crc kubenswrapper[4796]: I1202 20:35:33.261340 4796 scope.go:117] "RemoveContainer" containerID="db671afd412dea6ad0e3f2b7108707a4ec4fc39368de0d12bff4333a3a262472" Dec 02 20:35:33 crc kubenswrapper[4796]: E1202 20:35:33.261812 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db671afd412dea6ad0e3f2b7108707a4ec4fc39368de0d12bff4333a3a262472\": container with ID starting with db671afd412dea6ad0e3f2b7108707a4ec4fc39368de0d12bff4333a3a262472 not found: ID does not exist" containerID="db671afd412dea6ad0e3f2b7108707a4ec4fc39368de0d12bff4333a3a262472" Dec 02 20:35:33 crc kubenswrapper[4796]: I1202 20:35:33.261850 4796 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"db671afd412dea6ad0e3f2b7108707a4ec4fc39368de0d12bff4333a3a262472"} err="failed to get container status \"db671afd412dea6ad0e3f2b7108707a4ec4fc39368de0d12bff4333a3a262472\": rpc error: code = NotFound desc = could not find container \"db671afd412dea6ad0e3f2b7108707a4ec4fc39368de0d12bff4333a3a262472\": container with ID starting with db671afd412dea6ad0e3f2b7108707a4ec4fc39368de0d12bff4333a3a262472 not found: ID does not exist" Dec 02 20:35:33 crc kubenswrapper[4796]: I1202 20:35:33.279287 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:35:33 crc kubenswrapper[4796]: I1202 20:35:33.290416 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:35:33 crc kubenswrapper[4796]: I1202 20:35:33.813284 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-4kmtk"] Dec 02 20:35:33 crc kubenswrapper[4796]: E1202 20:35:33.813855 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80" containerName="watcher-decision-engine" Dec 02 20:35:33 crc kubenswrapper[4796]: I1202 20:35:33.813868 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80" containerName="watcher-decision-engine" Dec 02 20:35:33 crc kubenswrapper[4796]: I1202 20:35:33.814021 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80" containerName="watcher-decision-engine" Dec 02 20:35:33 crc kubenswrapper[4796]: I1202 20:35:33.814596 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-4kmtk" Dec 02 20:35:33 crc kubenswrapper[4796]: I1202 20:35:33.834077 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-4kmtk"] Dec 02 20:35:33 crc kubenswrapper[4796]: I1202 20:35:33.845582 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e3f97d0-7913-47ba-a3a8-e48c0c941aec-operator-scripts\") pod \"watcher-db-create-4kmtk\" (UID: \"3e3f97d0-7913-47ba-a3a8-e48c0c941aec\") " pod="watcher-kuttl-default/watcher-db-create-4kmtk" Dec 02 20:35:33 crc kubenswrapper[4796]: I1202 20:35:33.845635 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t57l\" (UniqueName: \"kubernetes.io/projected/3e3f97d0-7913-47ba-a3a8-e48c0c941aec-kube-api-access-6t57l\") pod \"watcher-db-create-4kmtk\" (UID: \"3e3f97d0-7913-47ba-a3a8-e48c0c941aec\") " pod="watcher-kuttl-default/watcher-db-create-4kmtk" Dec 02 20:35:33 crc kubenswrapper[4796]: I1202 20:35:33.919108 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-be33-account-create-update-flhzh"] Dec 02 20:35:33 crc kubenswrapper[4796]: I1202 20:35:33.920327 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-be33-account-create-update-flhzh" Dec 02 20:35:33 crc kubenswrapper[4796]: I1202 20:35:33.927706 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Dec 02 20:35:33 crc kubenswrapper[4796]: I1202 20:35:33.944597 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-be33-account-create-update-flhzh"] Dec 02 20:35:33 crc kubenswrapper[4796]: I1202 20:35:33.947137 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f816cdb8-5959-4df2-942d-cd6f7d47557a-operator-scripts\") pod \"watcher-be33-account-create-update-flhzh\" (UID: \"f816cdb8-5959-4df2-942d-cd6f7d47557a\") " pod="watcher-kuttl-default/watcher-be33-account-create-update-flhzh" Dec 02 20:35:33 crc kubenswrapper[4796]: I1202 20:35:33.947521 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rszp\" (UniqueName: \"kubernetes.io/projected/f816cdb8-5959-4df2-942d-cd6f7d47557a-kube-api-access-6rszp\") pod \"watcher-be33-account-create-update-flhzh\" (UID: \"f816cdb8-5959-4df2-942d-cd6f7d47557a\") " pod="watcher-kuttl-default/watcher-be33-account-create-update-flhzh" Dec 02 20:35:33 crc kubenswrapper[4796]: I1202 20:35:33.948010 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e3f97d0-7913-47ba-a3a8-e48c0c941aec-operator-scripts\") pod \"watcher-db-create-4kmtk\" (UID: \"3e3f97d0-7913-47ba-a3a8-e48c0c941aec\") " pod="watcher-kuttl-default/watcher-db-create-4kmtk" Dec 02 20:35:33 crc kubenswrapper[4796]: I1202 20:35:33.948080 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t57l\" (UniqueName: \"kubernetes.io/projected/3e3f97d0-7913-47ba-a3a8-e48c0c941aec-kube-api-access-6t57l\") pod \"watcher-db-create-4kmtk\" (UID: \"3e3f97d0-7913-47ba-a3a8-e48c0c941aec\") " pod="watcher-kuttl-default/watcher-db-create-4kmtk" Dec 02 20:35:33 crc kubenswrapper[4796]: I1202 20:35:33.948824 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e3f97d0-7913-47ba-a3a8-e48c0c941aec-operator-scripts\") pod \"watcher-db-create-4kmtk\" (UID: \"3e3f97d0-7913-47ba-a3a8-e48c0c941aec\") " pod="watcher-kuttl-default/watcher-db-create-4kmtk" Dec 02 20:35:33 crc kubenswrapper[4796]: I1202 20:35:33.981401 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t57l\" (UniqueName: \"kubernetes.io/projected/3e3f97d0-7913-47ba-a3a8-e48c0c941aec-kube-api-access-6t57l\") pod \"watcher-db-create-4kmtk\" (UID: \"3e3f97d0-7913-47ba-a3a8-e48c0c941aec\") " pod="watcher-kuttl-default/watcher-db-create-4kmtk" Dec 02 20:35:34 crc kubenswrapper[4796]: I1202 20:35:34.049835 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f816cdb8-5959-4df2-942d-cd6f7d47557a-operator-scripts\") pod \"watcher-be33-account-create-update-flhzh\" (UID: \"f816cdb8-5959-4df2-942d-cd6f7d47557a\") " pod="watcher-kuttl-default/watcher-be33-account-create-update-flhzh" Dec 02 20:35:34 crc kubenswrapper[4796]: I1202 20:35:34.050198 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rszp\" 
(UniqueName: \"kubernetes.io/projected/f816cdb8-5959-4df2-942d-cd6f7d47557a-kube-api-access-6rszp\") pod \"watcher-be33-account-create-update-flhzh\" (UID: \"f816cdb8-5959-4df2-942d-cd6f7d47557a\") " pod="watcher-kuttl-default/watcher-be33-account-create-update-flhzh" Dec 02 20:35:34 crc kubenswrapper[4796]: I1202 20:35:34.050672 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f816cdb8-5959-4df2-942d-cd6f7d47557a-operator-scripts\") pod \"watcher-be33-account-create-update-flhzh\" (UID: \"f816cdb8-5959-4df2-942d-cd6f7d47557a\") " pod="watcher-kuttl-default/watcher-be33-account-create-update-flhzh" Dec 02 20:35:34 crc kubenswrapper[4796]: I1202 20:35:34.078616 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rszp\" (UniqueName: \"kubernetes.io/projected/f816cdb8-5959-4df2-942d-cd6f7d47557a-kube-api-access-6rszp\") pod \"watcher-be33-account-create-update-flhzh\" (UID: \"f816cdb8-5959-4df2-942d-cd6f7d47557a\") " pod="watcher-kuttl-default/watcher-be33-account-create-update-flhzh" Dec 02 20:35:34 crc kubenswrapper[4796]: I1202 20:35:34.130573 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-4kmtk" Dec 02 20:35:34 crc kubenswrapper[4796]: I1202 20:35:34.236410 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-be33-account-create-update-flhzh" Dec 02 20:35:34 crc kubenswrapper[4796]: I1202 20:35:34.277631 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"214f3adf-67a1-48bf-a163-3e9f45139e88","Type":"ContainerStarted","Data":"619e0f1370574433b99b0ffab14539a0edbedf0df454e03c596247eb14dce9d1"} Dec 02 20:35:34 crc kubenswrapper[4796]: I1202 20:35:34.614827 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-4kmtk"] Dec 02 20:35:34 crc kubenswrapper[4796]: I1202 20:35:34.797404 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-be33-account-create-update-flhzh"] Dec 02 20:35:34 crc kubenswrapper[4796]: W1202 20:35:34.807307 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf816cdb8_5959_4df2_942d_cd6f7d47557a.slice/crio-2878ca8ebb27b7370bd3d6b157d0b81f9017ddadb5be87f5edc9de1d5099fbcb WatchSource:0}: Error finding container 2878ca8ebb27b7370bd3d6b157d0b81f9017ddadb5be87f5edc9de1d5099fbcb: Status 404 returned error can't find the container with id 2878ca8ebb27b7370bd3d6b157d0b81f9017ddadb5be87f5edc9de1d5099fbcb Dec 02 20:35:35 crc kubenswrapper[4796]: I1202 20:35:35.278022 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80" path="/var/lib/kubelet/pods/1f0bd83e-fe69-4c53-8c68-dd61c3d8ea80/volumes" Dec 02 20:35:35 crc kubenswrapper[4796]: I1202 20:35:35.289733 4796 generic.go:334] "Generic (PLEG): container finished" podID="3e3f97d0-7913-47ba-a3a8-e48c0c941aec" containerID="914dcb19a1c7df55747d4d9a8640809ad767d23ef85d05944ddc8a4501658aa3" exitCode=0 Dec 02 20:35:35 crc kubenswrapper[4796]: I1202 20:35:35.289803 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-4kmtk" 
event={"ID":"3e3f97d0-7913-47ba-a3a8-e48c0c941aec","Type":"ContainerDied","Data":"914dcb19a1c7df55747d4d9a8640809ad767d23ef85d05944ddc8a4501658aa3"} Dec 02 20:35:35 crc kubenswrapper[4796]: I1202 20:35:35.289834 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-4kmtk" event={"ID":"3e3f97d0-7913-47ba-a3a8-e48c0c941aec","Type":"ContainerStarted","Data":"ed791fdcf363acdbcd58c31a277f5b5fec1a4bd4542478fc4f7094538f17477b"} Dec 02 20:35:35 crc kubenswrapper[4796]: I1202 20:35:35.295409 4796 generic.go:334] "Generic (PLEG): container finished" podID="f816cdb8-5959-4df2-942d-cd6f7d47557a" containerID="d087025674c71fedb8e07e48cc3f4e950071f11f9dea1e33bab7b3739bb1585b" exitCode=0 Dec 02 20:35:35 crc kubenswrapper[4796]: I1202 20:35:35.295449 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-be33-account-create-update-flhzh" event={"ID":"f816cdb8-5959-4df2-942d-cd6f7d47557a","Type":"ContainerDied","Data":"d087025674c71fedb8e07e48cc3f4e950071f11f9dea1e33bab7b3739bb1585b"} Dec 02 20:35:35 crc kubenswrapper[4796]: I1202 20:35:35.295472 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-be33-account-create-update-flhzh" event={"ID":"f816cdb8-5959-4df2-942d-cd6f7d47557a","Type":"ContainerStarted","Data":"2878ca8ebb27b7370bd3d6b157d0b81f9017ddadb5be87f5edc9de1d5099fbcb"} Dec 02 20:35:36 crc kubenswrapper[4796]: I1202 20:35:36.309081 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"214f3adf-67a1-48bf-a163-3e9f45139e88","Type":"ContainerStarted","Data":"65eaf90eb48f8b23674976898ecdc2662cb67c90fe4b9eb440f8ef21787d2605"} Dec 02 20:35:36 crc kubenswrapper[4796]: I1202 20:35:36.353139 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.670147187 podStartE2EDuration="6.353115505s" podCreationTimestamp="2025-12-02 20:35:30 +0000 UTC" firstStartedPulling="2025-12-02 20:35:31.416737941 +0000 UTC m=+1414.420113515" lastFinishedPulling="2025-12-02 20:35:35.099706309 +0000 UTC m=+1418.103081833" observedRunningTime="2025-12-02 20:35:36.349272811 +0000 UTC m=+1419.352648345" watchObservedRunningTime="2025-12-02 20:35:36.353115505 +0000 UTC m=+1419.356491039" Dec 02 20:35:36 crc kubenswrapper[4796]: I1202 20:35:36.725585 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-4kmtk" Dec 02 20:35:36 crc kubenswrapper[4796]: I1202 20:35:36.808516 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e3f97d0-7913-47ba-a3a8-e48c0c941aec-operator-scripts\") pod \"3e3f97d0-7913-47ba-a3a8-e48c0c941aec\" (UID: \"3e3f97d0-7913-47ba-a3a8-e48c0c941aec\") " Dec 02 20:35:36 crc kubenswrapper[4796]: I1202 20:35:36.808668 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6t57l\" (UniqueName: \"kubernetes.io/projected/3e3f97d0-7913-47ba-a3a8-e48c0c941aec-kube-api-access-6t57l\") pod \"3e3f97d0-7913-47ba-a3a8-e48c0c941aec\" (UID: \"3e3f97d0-7913-47ba-a3a8-e48c0c941aec\") " Dec 02 20:35:36 crc kubenswrapper[4796]: I1202 20:35:36.809433 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e3f97d0-7913-47ba-a3a8-e48c0c941aec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3e3f97d0-7913-47ba-a3a8-e48c0c941aec" (UID: "3e3f97d0-7913-47ba-a3a8-e48c0c941aec"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:35:36 crc kubenswrapper[4796]: I1202 20:35:36.814799 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-be33-account-create-update-flhzh" Dec 02 20:35:36 crc kubenswrapper[4796]: I1202 20:35:36.817851 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e3f97d0-7913-47ba-a3a8-e48c0c941aec-kube-api-access-6t57l" (OuterVolumeSpecName: "kube-api-access-6t57l") pod "3e3f97d0-7913-47ba-a3a8-e48c0c941aec" (UID: "3e3f97d0-7913-47ba-a3a8-e48c0c941aec"). InnerVolumeSpecName "kube-api-access-6t57l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:35:36 crc kubenswrapper[4796]: I1202 20:35:36.909670 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f816cdb8-5959-4df2-942d-cd6f7d47557a-operator-scripts\") pod \"f816cdb8-5959-4df2-942d-cd6f7d47557a\" (UID: \"f816cdb8-5959-4df2-942d-cd6f7d47557a\") " Dec 02 20:35:36 crc kubenswrapper[4796]: I1202 20:35:36.909883 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rszp\" (UniqueName: \"kubernetes.io/projected/f816cdb8-5959-4df2-942d-cd6f7d47557a-kube-api-access-6rszp\") pod \"f816cdb8-5959-4df2-942d-cd6f7d47557a\" (UID: \"f816cdb8-5959-4df2-942d-cd6f7d47557a\") " Dec 02 20:35:36 crc kubenswrapper[4796]: I1202 20:35:36.910212 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6t57l\" (UniqueName: \"kubernetes.io/projected/3e3f97d0-7913-47ba-a3a8-e48c0c941aec-kube-api-access-6t57l\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:36 crc kubenswrapper[4796]: I1202 20:35:36.910237 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e3f97d0-7913-47ba-a3a8-e48c0c941aec-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:36 crc kubenswrapper[4796]: I1202 20:35:36.911035 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f816cdb8-5959-4df2-942d-cd6f7d47557a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f816cdb8-5959-4df2-942d-cd6f7d47557a" (UID: "f816cdb8-5959-4df2-942d-cd6f7d47557a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:35:36 crc kubenswrapper[4796]: I1202 20:35:36.917559 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f816cdb8-5959-4df2-942d-cd6f7d47557a-kube-api-access-6rszp" (OuterVolumeSpecName: "kube-api-access-6rszp") pod "f816cdb8-5959-4df2-942d-cd6f7d47557a" (UID: "f816cdb8-5959-4df2-942d-cd6f7d47557a"). InnerVolumeSpecName "kube-api-access-6rszp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:35:37 crc kubenswrapper[4796]: I1202 20:35:37.012071 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rszp\" (UniqueName: \"kubernetes.io/projected/f816cdb8-5959-4df2-942d-cd6f7d47557a-kube-api-access-6rszp\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:37 crc kubenswrapper[4796]: I1202 20:35:37.012119 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f816cdb8-5959-4df2-942d-cd6f7d47557a-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:37 crc kubenswrapper[4796]: I1202 20:35:37.350686 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-be33-account-create-update-flhzh" event={"ID":"f816cdb8-5959-4df2-942d-cd6f7d47557a","Type":"ContainerDied","Data":"2878ca8ebb27b7370bd3d6b157d0b81f9017ddadb5be87f5edc9de1d5099fbcb"} Dec 02 20:35:37 crc kubenswrapper[4796]: I1202 20:35:37.350757 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2878ca8ebb27b7370bd3d6b157d0b81f9017ddadb5be87f5edc9de1d5099fbcb" Dec 02 20:35:37 crc kubenswrapper[4796]: I1202 20:35:37.350867 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-be33-account-create-update-flhzh" Dec 02 20:35:37 crc kubenswrapper[4796]: I1202 20:35:37.368125 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-4kmtk" Dec 02 20:35:37 crc kubenswrapper[4796]: I1202 20:35:37.368450 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-4kmtk" event={"ID":"3e3f97d0-7913-47ba-a3a8-e48c0c941aec","Type":"ContainerDied","Data":"ed791fdcf363acdbcd58c31a277f5b5fec1a4bd4542478fc4f7094538f17477b"} Dec 02 20:35:37 crc kubenswrapper[4796]: I1202 20:35:37.368554 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed791fdcf363acdbcd58c31a277f5b5fec1a4bd4542478fc4f7094538f17477b" Dec 02 20:35:37 crc kubenswrapper[4796]: I1202 20:35:37.368626 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:35:39 crc kubenswrapper[4796]: I1202 20:35:39.352774 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-pvj29"] Dec 02 20:35:39 crc kubenswrapper[4796]: E1202 20:35:39.353526 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f816cdb8-5959-4df2-942d-cd6f7d47557a" containerName="mariadb-account-create-update" Dec 02 20:35:39 crc kubenswrapper[4796]: I1202 20:35:39.353544 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f816cdb8-5959-4df2-942d-cd6f7d47557a" containerName="mariadb-account-create-update" Dec 02 20:35:39 crc kubenswrapper[4796]: E1202 20:35:39.353579 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e3f97d0-7913-47ba-a3a8-e48c0c941aec" containerName="mariadb-database-create" Dec 02 20:35:39 crc kubenswrapper[4796]: I1202 20:35:39.353589 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e3f97d0-7913-47ba-a3a8-e48c0c941aec" containerName="mariadb-database-create" Dec 02 20:35:39 crc kubenswrapper[4796]: I1202 20:35:39.353788 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f816cdb8-5959-4df2-942d-cd6f7d47557a" containerName="mariadb-account-create-update" Dec 02 20:35:39 crc kubenswrapper[4796]: I1202 20:35:39.353816 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e3f97d0-7913-47ba-a3a8-e48c0c941aec" containerName="mariadb-database-create" Dec 02 20:35:39 crc kubenswrapper[4796]: I1202 20:35:39.354836 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-pvj29" Dec 02 20:35:39 crc kubenswrapper[4796]: I1202 20:35:39.361692 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Dec 02 20:35:39 crc kubenswrapper[4796]: I1202 20:35:39.362217 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-2dwhf" Dec 02 20:35:39 crc kubenswrapper[4796]: I1202 20:35:39.372416 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-pvj29"] Dec 02 20:35:39 crc kubenswrapper[4796]: I1202 20:35:39.458060 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f678676b-e117-40a0-a5fa-7038047e75d5-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-pvj29\" (UID: \"f678676b-e117-40a0-a5fa-7038047e75d5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-pvj29" Dec 02 20:35:39 crc kubenswrapper[4796]: I1202 20:35:39.458150 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f678676b-e117-40a0-a5fa-7038047e75d5-config-data\") pod \"watcher-kuttl-db-sync-pvj29\" (UID: \"f678676b-e117-40a0-a5fa-7038047e75d5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-pvj29" Dec 02 20:35:39 crc kubenswrapper[4796]: I1202 20:35:39.458207 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kp7h\" (UniqueName: \"kubernetes.io/projected/f678676b-e117-40a0-a5fa-7038047e75d5-kube-api-access-6kp7h\") pod \"watcher-kuttl-db-sync-pvj29\" (UID: \"f678676b-e117-40a0-a5fa-7038047e75d5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-pvj29" Dec 02 20:35:39 crc kubenswrapper[4796]: I1202 20:35:39.458289 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f678676b-e117-40a0-a5fa-7038047e75d5-db-sync-config-data\") pod \"watcher-kuttl-db-sync-pvj29\" (UID: \"f678676b-e117-40a0-a5fa-7038047e75d5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-pvj29" Dec 02 20:35:39 crc kubenswrapper[4796]: I1202 20:35:39.559607 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f678676b-e117-40a0-a5fa-7038047e75d5-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-pvj29\" (UID: \"f678676b-e117-40a0-a5fa-7038047e75d5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-pvj29" Dec 02 20:35:39 crc kubenswrapper[4796]: I1202 20:35:39.560024 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f678676b-e117-40a0-a5fa-7038047e75d5-config-data\") pod \"watcher-kuttl-db-sync-pvj29\" (UID: \"f678676b-e117-40a0-a5fa-7038047e75d5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-pvj29" Dec 02 20:35:39 crc kubenswrapper[4796]: I1202 20:35:39.560165 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp7h\" (UniqueName: \"kubernetes.io/projected/f678676b-e117-40a0-a5fa-7038047e75d5-kube-api-access-6kp7h\") pod \"watcher-kuttl-db-sync-pvj29\" (UID: \"f678676b-e117-40a0-a5fa-7038047e75d5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-pvj29" Dec 02 20:35:39 crc 
kubenswrapper[4796]: I1202 20:35:39.560352 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f678676b-e117-40a0-a5fa-7038047e75d5-db-sync-config-data\") pod \"watcher-kuttl-db-sync-pvj29\" (UID: \"f678676b-e117-40a0-a5fa-7038047e75d5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-pvj29" Dec 02 20:35:39 crc kubenswrapper[4796]: I1202 20:35:39.567103 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f678676b-e117-40a0-a5fa-7038047e75d5-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-pvj29\" (UID: \"f678676b-e117-40a0-a5fa-7038047e75d5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-pvj29" Dec 02 20:35:39 crc kubenswrapper[4796]: I1202 20:35:39.570102 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f678676b-e117-40a0-a5fa-7038047e75d5-config-data\") pod \"watcher-kuttl-db-sync-pvj29\" (UID: \"f678676b-e117-40a0-a5fa-7038047e75d5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-pvj29" Dec 02 20:35:39 crc kubenswrapper[4796]: I1202 20:35:39.579396 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f678676b-e117-40a0-a5fa-7038047e75d5-db-sync-config-data\") pod \"watcher-kuttl-db-sync-pvj29\" (UID: \"f678676b-e117-40a0-a5fa-7038047e75d5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-pvj29" Dec 02 20:35:39 crc kubenswrapper[4796]: I1202 20:35:39.582291 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kp7h\" (UniqueName: \"kubernetes.io/projected/f678676b-e117-40a0-a5fa-7038047e75d5-kube-api-access-6kp7h\") pod \"watcher-kuttl-db-sync-pvj29\" (UID: \"f678676b-e117-40a0-a5fa-7038047e75d5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-pvj29" Dec 02 20:35:39 crc kubenswrapper[4796]: I1202 20:35:39.681080 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-pvj29" Dec 02 20:35:40 crc kubenswrapper[4796]: I1202 20:35:40.175823 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-pvj29"] Dec 02 20:35:40 crc kubenswrapper[4796]: I1202 20:35:40.419483 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-pvj29" event={"ID":"f678676b-e117-40a0-a5fa-7038047e75d5","Type":"ContainerStarted","Data":"a3d755ec7e2403ce841049ef585b88f6cbc4908275411093c0e9befccee621e7"} Dec 02 20:35:40 crc kubenswrapper[4796]: I1202 20:35:40.419934 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-pvj29" event={"ID":"f678676b-e117-40a0-a5fa-7038047e75d5","Type":"ContainerStarted","Data":"87c334275b3dbf27e9fe078ca46ecc0a2aab08f97519e32ce1157ca34406a276"} Dec 02 20:35:40 crc kubenswrapper[4796]: I1202 20:35:40.437942 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-pvj29" podStartSLOduration=1.437929144 podStartE2EDuration="1.437929144s" podCreationTimestamp="2025-12-02 20:35:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:35:40.435156057 +0000 UTC m=+1423.438531591" watchObservedRunningTime="2025-12-02 20:35:40.437929144 +0000 UTC m=+1423.441304678" Dec 02 20:35:43 crc kubenswrapper[4796]: I1202 20:35:43.449664 4796 generic.go:334] "Generic (PLEG): container finished" podID="f678676b-e117-40a0-a5fa-7038047e75d5" containerID="a3d755ec7e2403ce841049ef585b88f6cbc4908275411093c0e9befccee621e7" exitCode=0 Dec 02 20:35:43 crc kubenswrapper[4796]: I1202 20:35:43.449766 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-pvj29" event={"ID":"f678676b-e117-40a0-a5fa-7038047e75d5","Type":"ContainerDied","Data":"a3d755ec7e2403ce841049ef585b88f6cbc4908275411093c0e9befccee621e7"} Dec 02 20:35:44 crc kubenswrapper[4796]: I1202 20:35:44.922439 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-pvj29" Dec 02 20:35:44 crc kubenswrapper[4796]: I1202 20:35:44.970640 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f678676b-e117-40a0-a5fa-7038047e75d5-config-data\") pod \"f678676b-e117-40a0-a5fa-7038047e75d5\" (UID: \"f678676b-e117-40a0-a5fa-7038047e75d5\") " Dec 02 20:35:44 crc kubenswrapper[4796]: I1202 20:35:44.970722 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f678676b-e117-40a0-a5fa-7038047e75d5-combined-ca-bundle\") pod \"f678676b-e117-40a0-a5fa-7038047e75d5\" (UID: \"f678676b-e117-40a0-a5fa-7038047e75d5\") " Dec 02 20:35:44 crc kubenswrapper[4796]: I1202 20:35:44.970867 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f678676b-e117-40a0-a5fa-7038047e75d5-db-sync-config-data\") pod \"f678676b-e117-40a0-a5fa-7038047e75d5\" (UID: \"f678676b-e117-40a0-a5fa-7038047e75d5\") " Dec 02 20:35:44 crc kubenswrapper[4796]: I1202 20:35:44.970957 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kp7h\" (UniqueName: \"kubernetes.io/projected/f678676b-e117-40a0-a5fa-7038047e75d5-kube-api-access-6kp7h\") pod \"f678676b-e117-40a0-a5fa-7038047e75d5\" (UID: \"f678676b-e117-40a0-a5fa-7038047e75d5\") " Dec 02 20:35:44 crc kubenswrapper[4796]: I1202 20:35:44.978446 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f678676b-e117-40a0-a5fa-7038047e75d5-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f678676b-e117-40a0-a5fa-7038047e75d5" (UID: "f678676b-e117-40a0-a5fa-7038047e75d5"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:35:44 crc kubenswrapper[4796]: I1202 20:35:44.988500 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f678676b-e117-40a0-a5fa-7038047e75d5-kube-api-access-6kp7h" (OuterVolumeSpecName: "kube-api-access-6kp7h") pod "f678676b-e117-40a0-a5fa-7038047e75d5" (UID: "f678676b-e117-40a0-a5fa-7038047e75d5"). InnerVolumeSpecName "kube-api-access-6kp7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:35:45 crc kubenswrapper[4796]: I1202 20:35:45.006238 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f678676b-e117-40a0-a5fa-7038047e75d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f678676b-e117-40a0-a5fa-7038047e75d5" (UID: "f678676b-e117-40a0-a5fa-7038047e75d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:35:45 crc kubenswrapper[4796]: I1202 20:35:45.023321 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f678676b-e117-40a0-a5fa-7038047e75d5-config-data" (OuterVolumeSpecName: "config-data") pod "f678676b-e117-40a0-a5fa-7038047e75d5" (UID: "f678676b-e117-40a0-a5fa-7038047e75d5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:35:45 crc kubenswrapper[4796]: I1202 20:35:45.073199 4796 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f678676b-e117-40a0-a5fa-7038047e75d5-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:45 crc kubenswrapper[4796]: I1202 20:35:45.073536 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kp7h\" (UniqueName: \"kubernetes.io/projected/f678676b-e117-40a0-a5fa-7038047e75d5-kube-api-access-6kp7h\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:45 crc kubenswrapper[4796]: I1202 20:35:45.073565 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f678676b-e117-40a0-a5fa-7038047e75d5-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:45 crc kubenswrapper[4796]: I1202 20:35:45.073579 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f678676b-e117-40a0-a5fa-7038047e75d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:35:45 crc kubenswrapper[4796]: I1202 20:35:45.474428 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-pvj29" event={"ID":"f678676b-e117-40a0-a5fa-7038047e75d5","Type":"ContainerDied","Data":"87c334275b3dbf27e9fe078ca46ecc0a2aab08f97519e32ce1157ca34406a276"} Dec 02 20:35:45 crc kubenswrapper[4796]: I1202 20:35:45.474486 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87c334275b3dbf27e9fe078ca46ecc0a2aab08f97519e32ce1157ca34406a276" Dec 02 20:35:45 crc kubenswrapper[4796]: I1202 20:35:45.474598 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-pvj29" Dec 02 20:35:45 crc kubenswrapper[4796]: I1202 20:35:45.807976 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:35:45 crc kubenswrapper[4796]: E1202 20:35:45.808483 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f678676b-e117-40a0-a5fa-7038047e75d5" containerName="watcher-kuttl-db-sync" Dec 02 20:35:45 crc kubenswrapper[4796]: I1202 20:35:45.808508 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f678676b-e117-40a0-a5fa-7038047e75d5" containerName="watcher-kuttl-db-sync" Dec 02 20:35:45 crc kubenswrapper[4796]: I1202 20:35:45.808747 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f678676b-e117-40a0-a5fa-7038047e75d5" containerName="watcher-kuttl-db-sync" Dec 02 20:35:45 crc kubenswrapper[4796]: I1202 20:35:45.811622 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:45 crc kubenswrapper[4796]: I1202 20:35:45.822389 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:35:45 crc kubenswrapper[4796]: I1202 20:35:45.825573 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-public-svc" Dec 02 20:35:45 crc kubenswrapper[4796]: I1202 20:35:45.825899 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-2dwhf" Dec 02 20:35:45 crc kubenswrapper[4796]: I1202 20:35:45.826071 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Dec 02 20:35:45 crc kubenswrapper[4796]: I1202 20:35:45.826504 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-internal-svc" Dec 02 20:35:45 crc kubenswrapper[4796]: I1202 20:35:45.836331 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:35:45 crc kubenswrapper[4796]: I1202 20:35:45.838670 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:35:45 crc kubenswrapper[4796]: I1202 20:35:45.841418 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Dec 02 20:35:45 crc kubenswrapper[4796]: I1202 20:35:45.847601 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:35:45 crc kubenswrapper[4796]: I1202 20:35:45.861987 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:35:45 crc kubenswrapper[4796]: I1202 20:35:45.866742 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:45 crc kubenswrapper[4796]: I1202 20:35:45.869143 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Dec 02 20:35:45 crc kubenswrapper[4796]: I1202 20:35:45.933387 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hstqv\" (UniqueName: \"kubernetes.io/projected/3935e453-d78c-4ddb-8de1-51b4699e33be-kube-api-access-hstqv\") pod \"watcher-kuttl-applier-0\" (UID: \"3935e453-d78c-4ddb-8de1-51b4699e33be\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:35:45 crc kubenswrapper[4796]: I1202 20:35:45.933469 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3935e453-d78c-4ddb-8de1-51b4699e33be-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"3935e453-d78c-4ddb-8de1-51b4699e33be\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:35:45 crc kubenswrapper[4796]: I1202 20:35:45.933743 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgh66\" (UniqueName: \"kubernetes.io/projected/59820ff0-e57d-4496-8566-09abbba3dd5a-kube-api-access-rgh66\") pod \"watcher-kuttl-api-0\" (UID: \"59820ff0-e57d-4496-8566-09abbba3dd5a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:45 crc kubenswrapper[4796]: I1202 20:35:45.933814 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc870eb3-bc6e-44eb-ab6d-3c786278a702-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cc870eb3-bc6e-44eb-ab6d-3c786278a702\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:45 crc kubenswrapper[4796]: I1202 20:35:45.933845 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59820ff0-e57d-4496-8566-09abbba3dd5a-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"59820ff0-e57d-4496-8566-09abbba3dd5a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:45 crc kubenswrapper[4796]: I1202 20:35:45.933872 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/59820ff0-e57d-4496-8566-09abbba3dd5a-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"59820ff0-e57d-4496-8566-09abbba3dd5a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:45 crc kubenswrapper[4796]: I1202 20:35:45.933956 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59820ff0-e57d-4496-8566-09abbba3dd5a-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"59820ff0-e57d-4496-8566-09abbba3dd5a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:45 crc kubenswrapper[4796]: I1202 20:35:45.933983 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc870eb3-bc6e-44eb-ab6d-3c786278a702-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cc870eb3-bc6e-44eb-ab6d-3c786278a702\") " 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:45 crc kubenswrapper[4796]: I1202 20:35:45.934012 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3935e453-d78c-4ddb-8de1-51b4699e33be-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"3935e453-d78c-4ddb-8de1-51b4699e33be\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:35:45 crc kubenswrapper[4796]: I1202 20:35:45.934178 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59820ff0-e57d-4496-8566-09abbba3dd5a-logs\") pod \"watcher-kuttl-api-0\" (UID: \"59820ff0-e57d-4496-8566-09abbba3dd5a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:45 crc kubenswrapper[4796]: I1202 20:35:45.934207 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59820ff0-e57d-4496-8566-09abbba3dd5a-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"59820ff0-e57d-4496-8566-09abbba3dd5a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:45 crc kubenswrapper[4796]: I1202 20:35:45.934882 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb2m4\" (UniqueName: \"kubernetes.io/projected/cc870eb3-bc6e-44eb-ab6d-3c786278a702-kube-api-access-wb2m4\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cc870eb3-bc6e-44eb-ab6d-3c786278a702\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:45 crc kubenswrapper[4796]: I1202 20:35:45.934922 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc870eb3-bc6e-44eb-ab6d-3c786278a702-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cc870eb3-bc6e-44eb-ab6d-3c786278a702\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:45 crc kubenswrapper[4796]: I1202 20:35:45.935176 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3935e453-d78c-4ddb-8de1-51b4699e33be-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"3935e453-d78c-4ddb-8de1-51b4699e33be\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:35:45 crc kubenswrapper[4796]: I1202 20:35:45.935351 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/59820ff0-e57d-4496-8566-09abbba3dd5a-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"59820ff0-e57d-4496-8566-09abbba3dd5a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:45 crc kubenswrapper[4796]: I1202 20:35:45.935380 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cc870eb3-bc6e-44eb-ab6d-3c786278a702-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cc870eb3-bc6e-44eb-ab6d-3c786278a702\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:45 crc kubenswrapper[4796]: I1202 20:35:45.935926 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:35:46 crc 
kubenswrapper[4796]: I1202 20:35:46.038404 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/59820ff0-e57d-4496-8566-09abbba3dd5a-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"59820ff0-e57d-4496-8566-09abbba3dd5a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:46 crc kubenswrapper[4796]: I1202 20:35:46.039755 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cc870eb3-bc6e-44eb-ab6d-3c786278a702-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cc870eb3-bc6e-44eb-ab6d-3c786278a702\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:46 crc kubenswrapper[4796]: I1202 20:35:46.039898 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hstqv\" (UniqueName: \"kubernetes.io/projected/3935e453-d78c-4ddb-8de1-51b4699e33be-kube-api-access-hstqv\") pod \"watcher-kuttl-applier-0\" (UID: \"3935e453-d78c-4ddb-8de1-51b4699e33be\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:35:46 crc kubenswrapper[4796]: I1202 20:35:46.039984 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3935e453-d78c-4ddb-8de1-51b4699e33be-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"3935e453-d78c-4ddb-8de1-51b4699e33be\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:35:46 crc kubenswrapper[4796]: I1202 20:35:46.040054 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgh66\" (UniqueName: \"kubernetes.io/projected/59820ff0-e57d-4496-8566-09abbba3dd5a-kube-api-access-rgh66\") pod \"watcher-kuttl-api-0\" (UID: \"59820ff0-e57d-4496-8566-09abbba3dd5a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:46 crc kubenswrapper[4796]: I1202 20:35:46.040143 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc870eb3-bc6e-44eb-ab6d-3c786278a702-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cc870eb3-bc6e-44eb-ab6d-3c786278a702\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:46 crc kubenswrapper[4796]: I1202 20:35:46.040212 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59820ff0-e57d-4496-8566-09abbba3dd5a-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"59820ff0-e57d-4496-8566-09abbba3dd5a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:46 crc kubenswrapper[4796]: I1202 20:35:46.040296 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/59820ff0-e57d-4496-8566-09abbba3dd5a-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"59820ff0-e57d-4496-8566-09abbba3dd5a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:46 crc kubenswrapper[4796]: I1202 20:35:46.040402 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59820ff0-e57d-4496-8566-09abbba3dd5a-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"59820ff0-e57d-4496-8566-09abbba3dd5a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:46 crc kubenswrapper[4796]: I1202 
20:35:46.040478 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc870eb3-bc6e-44eb-ab6d-3c786278a702-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cc870eb3-bc6e-44eb-ab6d-3c786278a702\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:46 crc kubenswrapper[4796]: I1202 20:35:46.040593 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3935e453-d78c-4ddb-8de1-51b4699e33be-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"3935e453-d78c-4ddb-8de1-51b4699e33be\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:35:46 crc kubenswrapper[4796]: I1202 20:35:46.040672 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59820ff0-e57d-4496-8566-09abbba3dd5a-logs\") pod \"watcher-kuttl-api-0\" (UID: \"59820ff0-e57d-4496-8566-09abbba3dd5a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:46 crc kubenswrapper[4796]: I1202 20:35:46.040755 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59820ff0-e57d-4496-8566-09abbba3dd5a-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"59820ff0-e57d-4496-8566-09abbba3dd5a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:46 crc kubenswrapper[4796]: I1202 20:35:46.040846 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb2m4\" (UniqueName: \"kubernetes.io/projected/cc870eb3-bc6e-44eb-ab6d-3c786278a702-kube-api-access-wb2m4\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cc870eb3-bc6e-44eb-ab6d-3c786278a702\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:46 crc kubenswrapper[4796]: I1202 20:35:46.040917 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc870eb3-bc6e-44eb-ab6d-3c786278a702-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cc870eb3-bc6e-44eb-ab6d-3c786278a702\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:46 crc kubenswrapper[4796]: I1202 20:35:46.041024 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3935e453-d78c-4ddb-8de1-51b4699e33be-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"3935e453-d78c-4ddb-8de1-51b4699e33be\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:35:46 crc kubenswrapper[4796]: I1202 20:35:46.042633 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59820ff0-e57d-4496-8566-09abbba3dd5a-logs\") pod \"watcher-kuttl-api-0\" (UID: \"59820ff0-e57d-4496-8566-09abbba3dd5a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:46 crc kubenswrapper[4796]: I1202 20:35:46.045304 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3935e453-d78c-4ddb-8de1-51b4699e33be-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"3935e453-d78c-4ddb-8de1-51b4699e33be\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:35:46 crc kubenswrapper[4796]: I1202 20:35:46.047345 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc870eb3-bc6e-44eb-ab6d-3c786278a702-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cc870eb3-bc6e-44eb-ab6d-3c786278a702\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:46 crc kubenswrapper[4796]: I1202 20:35:46.053032 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/59820ff0-e57d-4496-8566-09abbba3dd5a-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"59820ff0-e57d-4496-8566-09abbba3dd5a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:46 crc kubenswrapper[4796]: I1202 20:35:46.056018 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc870eb3-bc6e-44eb-ab6d-3c786278a702-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cc870eb3-bc6e-44eb-ab6d-3c786278a702\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:46 crc kubenswrapper[4796]: I1202 20:35:46.056126 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3935e453-d78c-4ddb-8de1-51b4699e33be-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"3935e453-d78c-4ddb-8de1-51b4699e33be\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:35:46 crc kubenswrapper[4796]: I1202 20:35:46.062737 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/59820ff0-e57d-4496-8566-09abbba3dd5a-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"59820ff0-e57d-4496-8566-09abbba3dd5a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:46 crc kubenswrapper[4796]: I1202 20:35:46.063835 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59820ff0-e57d-4496-8566-09abbba3dd5a-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"59820ff0-e57d-4496-8566-09abbba3dd5a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:46 crc kubenswrapper[4796]: I1202 20:35:46.064631 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59820ff0-e57d-4496-8566-09abbba3dd5a-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"59820ff0-e57d-4496-8566-09abbba3dd5a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:46 crc kubenswrapper[4796]: I1202 20:35:46.064656 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc870eb3-bc6e-44eb-ab6d-3c786278a702-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cc870eb3-bc6e-44eb-ab6d-3c786278a702\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:46 crc kubenswrapper[4796]: I1202 20:35:46.064735 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59820ff0-e57d-4496-8566-09abbba3dd5a-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"59820ff0-e57d-4496-8566-09abbba3dd5a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:46 crc kubenswrapper[4796]: I1202 20:35:46.065145 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3935e453-d78c-4ddb-8de1-51b4699e33be-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"3935e453-d78c-4ddb-8de1-51b4699e33be\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:35:46 crc kubenswrapper[4796]: I1202 20:35:46.072812 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cc870eb3-bc6e-44eb-ab6d-3c786278a702-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cc870eb3-bc6e-44eb-ab6d-3c786278a702\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:46 crc kubenswrapper[4796]: I1202 20:35:46.073379 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb2m4\" (UniqueName: \"kubernetes.io/projected/cc870eb3-bc6e-44eb-ab6d-3c786278a702-kube-api-access-wb2m4\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cc870eb3-bc6e-44eb-ab6d-3c786278a702\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:46 crc kubenswrapper[4796]: I1202 20:35:46.090834 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgh66\" (UniqueName: \"kubernetes.io/projected/59820ff0-e57d-4496-8566-09abbba3dd5a-kube-api-access-rgh66\") pod \"watcher-kuttl-api-0\" (UID: \"59820ff0-e57d-4496-8566-09abbba3dd5a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:46 crc kubenswrapper[4796]: I1202 20:35:46.096080 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hstqv\" (UniqueName: \"kubernetes.io/projected/3935e453-d78c-4ddb-8de1-51b4699e33be-kube-api-access-hstqv\") pod \"watcher-kuttl-applier-0\" (UID: \"3935e453-d78c-4ddb-8de1-51b4699e33be\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:35:46 crc kubenswrapper[4796]: I1202 20:35:46.129924 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:46 crc kubenswrapper[4796]: I1202 20:35:46.154830 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:35:46 crc kubenswrapper[4796]: I1202 20:35:46.184787 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:46 crc kubenswrapper[4796]: I1202 20:35:46.752514 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:35:46 crc kubenswrapper[4796]: W1202 20:35:46.760408 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3935e453_d78c_4ddb_8de1_51b4699e33be.slice/crio-02e051a98bde7ca6b750f5aedfd6af78bd885b3b13f177cb6c2f727652fd6f4b WatchSource:0}: Error finding container 02e051a98bde7ca6b750f5aedfd6af78bd885b3b13f177cb6c2f727652fd6f4b: Status 404 returned error can't find the container with id 02e051a98bde7ca6b750f5aedfd6af78bd885b3b13f177cb6c2f727652fd6f4b Dec 02 20:35:46 crc kubenswrapper[4796]: I1202 20:35:46.764301 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:35:46 crc kubenswrapper[4796]: W1202 20:35:46.769409 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc870eb3_bc6e_44eb_ab6d_3c786278a702.slice/crio-d4c48d6a0940509d928db5678a8a3a31eac36ce8024d733a9c2162b44828605d WatchSource:0}: Error finding container d4c48d6a0940509d928db5678a8a3a31eac36ce8024d733a9c2162b44828605d: Status 404 returned error can't find the container with id d4c48d6a0940509d928db5678a8a3a31eac36ce8024d733a9c2162b44828605d Dec 02 20:35:46 crc kubenswrapper[4796]: I1202 20:35:46.879304 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:35:46 crc kubenswrapper[4796]: W1202 20:35:46.880966 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59820ff0_e57d_4496_8566_09abbba3dd5a.slice/crio-3de3bdf5502f1b52a2b7090495c8644887e2602b6eecc2f9c3ce687cda8a6157 WatchSource:0}: Error finding container 3de3bdf5502f1b52a2b7090495c8644887e2602b6eecc2f9c3ce687cda8a6157: Status 404 returned error can't find the container with id 3de3bdf5502f1b52a2b7090495c8644887e2602b6eecc2f9c3ce687cda8a6157 Dec 02 20:35:47 crc kubenswrapper[4796]: I1202 20:35:47.516531 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"cc870eb3-bc6e-44eb-ab6d-3c786278a702","Type":"ContainerStarted","Data":"25229629b08b8f03c4439b35d1fd3c02e7e70ac188a57cd16fdabc846065e78f"} Dec 02 20:35:47 crc kubenswrapper[4796]: I1202 20:35:47.517390 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"cc870eb3-bc6e-44eb-ab6d-3c786278a702","Type":"ContainerStarted","Data":"d4c48d6a0940509d928db5678a8a3a31eac36ce8024d733a9c2162b44828605d"} Dec 02 20:35:47 crc kubenswrapper[4796]: I1202 20:35:47.538092 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"3935e453-d78c-4ddb-8de1-51b4699e33be","Type":"ContainerStarted","Data":"14cc6d0ebd4d0d47b43be90385e3b25e068b565b3e85d0ce4492fa4c321611fe"} Dec 02 20:35:47 crc kubenswrapper[4796]: I1202 20:35:47.538143 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"3935e453-d78c-4ddb-8de1-51b4699e33be","Type":"ContainerStarted","Data":"02e051a98bde7ca6b750f5aedfd6af78bd885b3b13f177cb6c2f727652fd6f4b"} Dec 02 20:35:47 crc 
kubenswrapper[4796]: I1202 20:35:47.548950 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"59820ff0-e57d-4496-8566-09abbba3dd5a","Type":"ContainerStarted","Data":"cee4b2da8c892f5a5d7cabe31a25378a64025e20d65b1c579f6678946677aed0"}
Dec 02 20:35:47 crc kubenswrapper[4796]: I1202 20:35:47.549008 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"59820ff0-e57d-4496-8566-09abbba3dd5a","Type":"ContainerStarted","Data":"7ac0292ac146801e6875b6eea531d64e78ec1f04f2be59369c96da79cd021532"}
Dec 02 20:35:47 crc kubenswrapper[4796]: I1202 20:35:47.549018 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"59820ff0-e57d-4496-8566-09abbba3dd5a","Type":"ContainerStarted","Data":"3de3bdf5502f1b52a2b7090495c8644887e2602b6eecc2f9c3ce687cda8a6157"}
Dec 02 20:35:47 crc kubenswrapper[4796]: I1202 20:35:47.549420 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Dec 02 20:35:47 crc kubenswrapper[4796]: I1202 20:35:47.550546 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="59820ff0-e57d-4496-8566-09abbba3dd5a" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.159:9322/\": dial tcp 10.217.0.159:9322: connect: connection refused"
Dec 02 20:35:47 crc kubenswrapper[4796]: I1202 20:35:47.580235 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.580215826 podStartE2EDuration="2.580215826s" podCreationTimestamp="2025-12-02 20:35:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:35:47.577853198 +0000 UTC m=+1430.581228732" watchObservedRunningTime="2025-12-02 20:35:47.580215826 +0000 UTC m=+1430.583591360"
Dec 02 20:35:47 crc kubenswrapper[4796]: I1202 20:35:47.587623 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.587608435 podStartE2EDuration="2.587608435s" podCreationTimestamp="2025-12-02 20:35:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:35:47.561616975 +0000 UTC m=+1430.564992509" watchObservedRunningTime="2025-12-02 20:35:47.587608435 +0000 UTC m=+1430.590983969"
Dec 02 20:35:47 crc kubenswrapper[4796]: I1202 20:35:47.606702 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.606683338 podStartE2EDuration="2.606683338s" podCreationTimestamp="2025-12-02 20:35:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:35:47.601464271 +0000 UTC m=+1430.604839805" watchObservedRunningTime="2025-12-02 20:35:47.606683338 +0000 UTC m=+1430.610058872"
Dec 02 20:35:50 crc kubenswrapper[4796]: I1202 20:35:50.789381 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0"
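All three Watcher pods report their startup latency above (roughly 2.6 s each), and the api pod's first readiness probe is refused before the service is listening and then turns ready at 20:35:50. The zero firstStartedPulling/lastFinishedPulling timestamps suggest the images were already present on the node, so no image-pull time contributed to these figures. The "Observed pod startup duration" records are easy to collect across a capture; a minimal sketch under the same assumed kubelet.log layout as the earlier sketches:

import re

SLO = re.compile(
    r'"Observed pod startup duration" pod="([^"]+)" '
    r'podStartSLOduration=([0-9.]+) podStartE2EDuration="([^"]+)"'
)

with open("kubelet.log") as fh:               # hypothetical path to this capture
    for line in fh:
        if (m := SLO.search(line)):
            pod, slo, e2e = m.groups()
            print(f"{pod}: SLO {float(slo):.3f}s, end-to-end {e2e}")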
pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:51 crc kubenswrapper[4796]: I1202 20:35:51.155498 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:35:56 crc kubenswrapper[4796]: I1202 20:35:56.132381 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:56 crc kubenswrapper[4796]: I1202 20:35:56.143046 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:56 crc kubenswrapper[4796]: I1202 20:35:56.155838 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:35:56 crc kubenswrapper[4796]: I1202 20:35:56.186593 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:56 crc kubenswrapper[4796]: I1202 20:35:56.187824 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:35:56 crc kubenswrapper[4796]: I1202 20:35:56.241510 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:56 crc kubenswrapper[4796]: I1202 20:35:56.638212 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:56 crc kubenswrapper[4796]: I1202 20:35:56.646358 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:35:56 crc kubenswrapper[4796]: I1202 20:35:56.670618 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:35:56 crc kubenswrapper[4796]: I1202 20:35:56.674575 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:35:58 crc kubenswrapper[4796]: I1202 20:35:58.925604 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:35:58 crc kubenswrapper[4796]: I1202 20:35:58.926122 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="214f3adf-67a1-48bf-a163-3e9f45139e88" containerName="ceilometer-central-agent" containerID="cri-o://64b198ceb27022e8bd0fa760bbb6ee77b6df88281e7c74a8adedb502f30504d3" gracePeriod=30 Dec 02 20:35:58 crc kubenswrapper[4796]: I1202 20:35:58.926187 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="214f3adf-67a1-48bf-a163-3e9f45139e88" containerName="proxy-httpd" containerID="cri-o://65eaf90eb48f8b23674976898ecdc2662cb67c90fe4b9eb440f8ef21787d2605" gracePeriod=30 Dec 02 20:35:58 crc kubenswrapper[4796]: I1202 20:35:58.926239 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="214f3adf-67a1-48bf-a163-3e9f45139e88" containerName="sg-core" containerID="cri-o://619e0f1370574433b99b0ffab14539a0edbedf0df454e03c596247eb14dce9d1" gracePeriod=30 Dec 02 20:35:58 crc kubenswrapper[4796]: I1202 20:35:58.926294 4796 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="watcher-kuttl-default/ceilometer-0" podUID="214f3adf-67a1-48bf-a163-3e9f45139e88" containerName="ceilometer-notification-agent" containerID="cri-o://1ba0919dcbf05ee12fc4e180e7b92270b5c064551da2fce2bdcbc737fd35f241" gracePeriod=30 Dec 02 20:35:58 crc kubenswrapper[4796]: I1202 20:35:58.934635 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="214f3adf-67a1-48bf-a163-3e9f45139e88" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.155:3000/\": read tcp 10.217.0.2:41580->10.217.0.155:3000: read: connection reset by peer" Dec 02 20:35:59 crc kubenswrapper[4796]: I1202 20:35:59.664855 4796 generic.go:334] "Generic (PLEG): container finished" podID="214f3adf-67a1-48bf-a163-3e9f45139e88" containerID="65eaf90eb48f8b23674976898ecdc2662cb67c90fe4b9eb440f8ef21787d2605" exitCode=0 Dec 02 20:35:59 crc kubenswrapper[4796]: I1202 20:35:59.665216 4796 generic.go:334] "Generic (PLEG): container finished" podID="214f3adf-67a1-48bf-a163-3e9f45139e88" containerID="619e0f1370574433b99b0ffab14539a0edbedf0df454e03c596247eb14dce9d1" exitCode=2 Dec 02 20:35:59 crc kubenswrapper[4796]: I1202 20:35:59.665351 4796 generic.go:334] "Generic (PLEG): container finished" podID="214f3adf-67a1-48bf-a163-3e9f45139e88" containerID="64b198ceb27022e8bd0fa760bbb6ee77b6df88281e7c74a8adedb502f30504d3" exitCode=0 Dec 02 20:35:59 crc kubenswrapper[4796]: I1202 20:35:59.664923 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"214f3adf-67a1-48bf-a163-3e9f45139e88","Type":"ContainerDied","Data":"65eaf90eb48f8b23674976898ecdc2662cb67c90fe4b9eb440f8ef21787d2605"} Dec 02 20:35:59 crc kubenswrapper[4796]: I1202 20:35:59.665544 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"214f3adf-67a1-48bf-a163-3e9f45139e88","Type":"ContainerDied","Data":"619e0f1370574433b99b0ffab14539a0edbedf0df454e03c596247eb14dce9d1"} Dec 02 20:35:59 crc kubenswrapper[4796]: I1202 20:35:59.665630 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"214f3adf-67a1-48bf-a163-3e9f45139e88","Type":"ContainerDied","Data":"64b198ceb27022e8bd0fa760bbb6ee77b6df88281e7c74a8adedb502f30504d3"} Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.431162 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.523435 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/214f3adf-67a1-48bf-a163-3e9f45139e88-run-httpd\") pod \"214f3adf-67a1-48bf-a163-3e9f45139e88\" (UID: \"214f3adf-67a1-48bf-a163-3e9f45139e88\") " Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.523502 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/214f3adf-67a1-48bf-a163-3e9f45139e88-config-data\") pod \"214f3adf-67a1-48bf-a163-3e9f45139e88\" (UID: \"214f3adf-67a1-48bf-a163-3e9f45139e88\") " Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.523574 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/214f3adf-67a1-48bf-a163-3e9f45139e88-sg-core-conf-yaml\") pod \"214f3adf-67a1-48bf-a163-3e9f45139e88\" (UID: \"214f3adf-67a1-48bf-a163-3e9f45139e88\") " Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.523616 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/214f3adf-67a1-48bf-a163-3e9f45139e88-scripts\") pod \"214f3adf-67a1-48bf-a163-3e9f45139e88\" (UID: \"214f3adf-67a1-48bf-a163-3e9f45139e88\") " Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.523678 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/214f3adf-67a1-48bf-a163-3e9f45139e88-log-httpd\") pod \"214f3adf-67a1-48bf-a163-3e9f45139e88\" (UID: \"214f3adf-67a1-48bf-a163-3e9f45139e88\") " Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.523740 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/214f3adf-67a1-48bf-a163-3e9f45139e88-ceilometer-tls-certs\") pod \"214f3adf-67a1-48bf-a163-3e9f45139e88\" (UID: \"214f3adf-67a1-48bf-a163-3e9f45139e88\") " Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.523793 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvsv5\" (UniqueName: \"kubernetes.io/projected/214f3adf-67a1-48bf-a163-3e9f45139e88-kube-api-access-rvsv5\") pod \"214f3adf-67a1-48bf-a163-3e9f45139e88\" (UID: \"214f3adf-67a1-48bf-a163-3e9f45139e88\") " Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.523819 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/214f3adf-67a1-48bf-a163-3e9f45139e88-combined-ca-bundle\") pod \"214f3adf-67a1-48bf-a163-3e9f45139e88\" (UID: \"214f3adf-67a1-48bf-a163-3e9f45139e88\") " Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.524852 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/214f3adf-67a1-48bf-a163-3e9f45139e88-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "214f3adf-67a1-48bf-a163-3e9f45139e88" (UID: "214f3adf-67a1-48bf-a163-3e9f45139e88"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.524972 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/214f3adf-67a1-48bf-a163-3e9f45139e88-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "214f3adf-67a1-48bf-a163-3e9f45139e88" (UID: "214f3adf-67a1-48bf-a163-3e9f45139e88"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.531604 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/214f3adf-67a1-48bf-a163-3e9f45139e88-kube-api-access-rvsv5" (OuterVolumeSpecName: "kube-api-access-rvsv5") pod "214f3adf-67a1-48bf-a163-3e9f45139e88" (UID: "214f3adf-67a1-48bf-a163-3e9f45139e88"). InnerVolumeSpecName "kube-api-access-rvsv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.550037 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/214f3adf-67a1-48bf-a163-3e9f45139e88-scripts" (OuterVolumeSpecName: "scripts") pod "214f3adf-67a1-48bf-a163-3e9f45139e88" (UID: "214f3adf-67a1-48bf-a163-3e9f45139e88"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.565149 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/214f3adf-67a1-48bf-a163-3e9f45139e88-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "214f3adf-67a1-48bf-a163-3e9f45139e88" (UID: "214f3adf-67a1-48bf-a163-3e9f45139e88"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.596351 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/214f3adf-67a1-48bf-a163-3e9f45139e88-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "214f3adf-67a1-48bf-a163-3e9f45139e88" (UID: "214f3adf-67a1-48bf-a163-3e9f45139e88"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.603772 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/214f3adf-67a1-48bf-a163-3e9f45139e88-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "214f3adf-67a1-48bf-a163-3e9f45139e88" (UID: "214f3adf-67a1-48bf-a163-3e9f45139e88"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.625873 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/214f3adf-67a1-48bf-a163-3e9f45139e88-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.626148 4796 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/214f3adf-67a1-48bf-a163-3e9f45139e88-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.626233 4796 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/214f3adf-67a1-48bf-a163-3e9f45139e88-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.626334 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvsv5\" (UniqueName: \"kubernetes.io/projected/214f3adf-67a1-48bf-a163-3e9f45139e88-kube-api-access-rvsv5\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.626422 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/214f3adf-67a1-48bf-a163-3e9f45139e88-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.626484 4796 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/214f3adf-67a1-48bf-a163-3e9f45139e88-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.626545 4796 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/214f3adf-67a1-48bf-a163-3e9f45139e88-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.642754 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/214f3adf-67a1-48bf-a163-3e9f45139e88-config-data" (OuterVolumeSpecName: "config-data") pod "214f3adf-67a1-48bf-a163-3e9f45139e88" (UID: "214f3adf-67a1-48bf-a163-3e9f45139e88"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.677064 4796 generic.go:334] "Generic (PLEG): container finished" podID="214f3adf-67a1-48bf-a163-3e9f45139e88" containerID="1ba0919dcbf05ee12fc4e180e7b92270b5c064551da2fce2bdcbc737fd35f241" exitCode=0 Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.677271 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"214f3adf-67a1-48bf-a163-3e9f45139e88","Type":"ContainerDied","Data":"1ba0919dcbf05ee12fc4e180e7b92270b5c064551da2fce2bdcbc737fd35f241"} Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.677427 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"214f3adf-67a1-48bf-a163-3e9f45139e88","Type":"ContainerDied","Data":"25f4314e5a94bb9d395928a9305073e0deec533e70e0980655cb70d74474b05e"} Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.677376 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.677452 4796 scope.go:117] "RemoveContainer" containerID="65eaf90eb48f8b23674976898ecdc2662cb67c90fe4b9eb440f8ef21787d2605" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.706752 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.708403 4796 scope.go:117] "RemoveContainer" containerID="619e0f1370574433b99b0ffab14539a0edbedf0df454e03c596247eb14dce9d1" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.715947 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.724826 4796 scope.go:117] "RemoveContainer" containerID="1ba0919dcbf05ee12fc4e180e7b92270b5c064551da2fce2bdcbc737fd35f241" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.728643 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/214f3adf-67a1-48bf-a163-3e9f45139e88-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.758320 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:36:00 crc kubenswrapper[4796]: E1202 20:36:00.758683 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="214f3adf-67a1-48bf-a163-3e9f45139e88" containerName="sg-core" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.758700 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="214f3adf-67a1-48bf-a163-3e9f45139e88" containerName="sg-core" Dec 02 20:36:00 crc kubenswrapper[4796]: E1202 20:36:00.758719 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="214f3adf-67a1-48bf-a163-3e9f45139e88" containerName="proxy-httpd" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.758725 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="214f3adf-67a1-48bf-a163-3e9f45139e88" containerName="proxy-httpd" Dec 02 20:36:00 crc kubenswrapper[4796]: E1202 20:36:00.758738 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="214f3adf-67a1-48bf-a163-3e9f45139e88" containerName="ceilometer-central-agent" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.758745 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="214f3adf-67a1-48bf-a163-3e9f45139e88" containerName="ceilometer-central-agent" Dec 02 20:36:00 crc kubenswrapper[4796]: E1202 20:36:00.758758 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="214f3adf-67a1-48bf-a163-3e9f45139e88" containerName="ceilometer-notification-agent" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.758764 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="214f3adf-67a1-48bf-a163-3e9f45139e88" containerName="ceilometer-notification-agent" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.758907 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="214f3adf-67a1-48bf-a163-3e9f45139e88" containerName="ceilometer-notification-agent" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.758917 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="214f3adf-67a1-48bf-a163-3e9f45139e88" containerName="ceilometer-central-agent" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.758929 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="214f3adf-67a1-48bf-a163-3e9f45139e88" 
containerName="sg-core" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.758940 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="214f3adf-67a1-48bf-a163-3e9f45139e88" containerName="proxy-httpd" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.761172 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.763294 4796 scope.go:117] "RemoveContainer" containerID="64b198ceb27022e8bd0fa760bbb6ee77b6df88281e7c74a8adedb502f30504d3" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.764474 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.764519 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.764679 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.771605 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.791945 4796 scope.go:117] "RemoveContainer" containerID="65eaf90eb48f8b23674976898ecdc2662cb67c90fe4b9eb440f8ef21787d2605" Dec 02 20:36:00 crc kubenswrapper[4796]: E1202 20:36:00.792685 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65eaf90eb48f8b23674976898ecdc2662cb67c90fe4b9eb440f8ef21787d2605\": container with ID starting with 65eaf90eb48f8b23674976898ecdc2662cb67c90fe4b9eb440f8ef21787d2605 not found: ID does not exist" containerID="65eaf90eb48f8b23674976898ecdc2662cb67c90fe4b9eb440f8ef21787d2605" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.792796 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65eaf90eb48f8b23674976898ecdc2662cb67c90fe4b9eb440f8ef21787d2605"} err="failed to get container status \"65eaf90eb48f8b23674976898ecdc2662cb67c90fe4b9eb440f8ef21787d2605\": rpc error: code = NotFound desc = could not find container \"65eaf90eb48f8b23674976898ecdc2662cb67c90fe4b9eb440f8ef21787d2605\": container with ID starting with 65eaf90eb48f8b23674976898ecdc2662cb67c90fe4b9eb440f8ef21787d2605 not found: ID does not exist" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.792884 4796 scope.go:117] "RemoveContainer" containerID="619e0f1370574433b99b0ffab14539a0edbedf0df454e03c596247eb14dce9d1" Dec 02 20:36:00 crc kubenswrapper[4796]: E1202 20:36:00.793296 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"619e0f1370574433b99b0ffab14539a0edbedf0df454e03c596247eb14dce9d1\": container with ID starting with 619e0f1370574433b99b0ffab14539a0edbedf0df454e03c596247eb14dce9d1 not found: ID does not exist" containerID="619e0f1370574433b99b0ffab14539a0edbedf0df454e03c596247eb14dce9d1" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.793336 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"619e0f1370574433b99b0ffab14539a0edbedf0df454e03c596247eb14dce9d1"} err="failed to get container status \"619e0f1370574433b99b0ffab14539a0edbedf0df454e03c596247eb14dce9d1\": rpc error: code = NotFound desc = could not find 
container \"619e0f1370574433b99b0ffab14539a0edbedf0df454e03c596247eb14dce9d1\": container with ID starting with 619e0f1370574433b99b0ffab14539a0edbedf0df454e03c596247eb14dce9d1 not found: ID does not exist" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.793362 4796 scope.go:117] "RemoveContainer" containerID="1ba0919dcbf05ee12fc4e180e7b92270b5c064551da2fce2bdcbc737fd35f241" Dec 02 20:36:00 crc kubenswrapper[4796]: E1202 20:36:00.793659 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ba0919dcbf05ee12fc4e180e7b92270b5c064551da2fce2bdcbc737fd35f241\": container with ID starting with 1ba0919dcbf05ee12fc4e180e7b92270b5c064551da2fce2bdcbc737fd35f241 not found: ID does not exist" containerID="1ba0919dcbf05ee12fc4e180e7b92270b5c064551da2fce2bdcbc737fd35f241" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.793748 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ba0919dcbf05ee12fc4e180e7b92270b5c064551da2fce2bdcbc737fd35f241"} err="failed to get container status \"1ba0919dcbf05ee12fc4e180e7b92270b5c064551da2fce2bdcbc737fd35f241\": rpc error: code = NotFound desc = could not find container \"1ba0919dcbf05ee12fc4e180e7b92270b5c064551da2fce2bdcbc737fd35f241\": container with ID starting with 1ba0919dcbf05ee12fc4e180e7b92270b5c064551da2fce2bdcbc737fd35f241 not found: ID does not exist" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.793826 4796 scope.go:117] "RemoveContainer" containerID="64b198ceb27022e8bd0fa760bbb6ee77b6df88281e7c74a8adedb502f30504d3" Dec 02 20:36:00 crc kubenswrapper[4796]: E1202 20:36:00.804453 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64b198ceb27022e8bd0fa760bbb6ee77b6df88281e7c74a8adedb502f30504d3\": container with ID starting with 64b198ceb27022e8bd0fa760bbb6ee77b6df88281e7c74a8adedb502f30504d3 not found: ID does not exist" containerID="64b198ceb27022e8bd0fa760bbb6ee77b6df88281e7c74a8adedb502f30504d3" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.804650 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64b198ceb27022e8bd0fa760bbb6ee77b6df88281e7c74a8adedb502f30504d3"} err="failed to get container status \"64b198ceb27022e8bd0fa760bbb6ee77b6df88281e7c74a8adedb502f30504d3\": rpc error: code = NotFound desc = could not find container \"64b198ceb27022e8bd0fa760bbb6ee77b6df88281e7c74a8adedb502f30504d3\": container with ID starting with 64b198ceb27022e8bd0fa760bbb6ee77b6df88281e7c74a8adedb502f30504d3 not found: ID does not exist" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.829880 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/825f702b-3e58-4760-a47b-865cf3080bce-scripts\") pod \"ceilometer-0\" (UID: \"825f702b-3e58-4760-a47b-865cf3080bce\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.829935 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/825f702b-3e58-4760-a47b-865cf3080bce-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"825f702b-3e58-4760-a47b-865cf3080bce\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.829969 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/825f702b-3e58-4760-a47b-865cf3080bce-log-httpd\") pod \"ceilometer-0\" (UID: \"825f702b-3e58-4760-a47b-865cf3080bce\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.829990 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/825f702b-3e58-4760-a47b-865cf3080bce-config-data\") pod \"ceilometer-0\" (UID: \"825f702b-3e58-4760-a47b-865cf3080bce\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.830071 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/825f702b-3e58-4760-a47b-865cf3080bce-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"825f702b-3e58-4760-a47b-865cf3080bce\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.830102 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlfhc\" (UniqueName: \"kubernetes.io/projected/825f702b-3e58-4760-a47b-865cf3080bce-kube-api-access-mlfhc\") pod \"ceilometer-0\" (UID: \"825f702b-3e58-4760-a47b-865cf3080bce\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.830288 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/825f702b-3e58-4760-a47b-865cf3080bce-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"825f702b-3e58-4760-a47b-865cf3080bce\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.830486 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/825f702b-3e58-4760-a47b-865cf3080bce-run-httpd\") pod \"ceilometer-0\" (UID: \"825f702b-3e58-4760-a47b-865cf3080bce\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.931993 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/825f702b-3e58-4760-a47b-865cf3080bce-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"825f702b-3e58-4760-a47b-865cf3080bce\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.932155 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/825f702b-3e58-4760-a47b-865cf3080bce-run-httpd\") pod \"ceilometer-0\" (UID: \"825f702b-3e58-4760-a47b-865cf3080bce\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.932226 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/825f702b-3e58-4760-a47b-865cf3080bce-scripts\") pod \"ceilometer-0\" (UID: \"825f702b-3e58-4760-a47b-865cf3080bce\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.932301 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/825f702b-3e58-4760-a47b-865cf3080bce-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"825f702b-3e58-4760-a47b-865cf3080bce\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.932345 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/825f702b-3e58-4760-a47b-865cf3080bce-log-httpd\") pod \"ceilometer-0\" (UID: \"825f702b-3e58-4760-a47b-865cf3080bce\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.932492 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/825f702b-3e58-4760-a47b-865cf3080bce-config-data\") pod \"ceilometer-0\" (UID: \"825f702b-3e58-4760-a47b-865cf3080bce\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.932560 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/825f702b-3e58-4760-a47b-865cf3080bce-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"825f702b-3e58-4760-a47b-865cf3080bce\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.932619 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlfhc\" (UniqueName: \"kubernetes.io/projected/825f702b-3e58-4760-a47b-865cf3080bce-kube-api-access-mlfhc\") pod \"ceilometer-0\" (UID: \"825f702b-3e58-4760-a47b-865cf3080bce\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.933144 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/825f702b-3e58-4760-a47b-865cf3080bce-run-httpd\") pod \"ceilometer-0\" (UID: \"825f702b-3e58-4760-a47b-865cf3080bce\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.937209 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/825f702b-3e58-4760-a47b-865cf3080bce-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"825f702b-3e58-4760-a47b-865cf3080bce\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.937374 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/825f702b-3e58-4760-a47b-865cf3080bce-config-data\") pod \"ceilometer-0\" (UID: \"825f702b-3e58-4760-a47b-865cf3080bce\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.937626 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/825f702b-3e58-4760-a47b-865cf3080bce-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"825f702b-3e58-4760-a47b-865cf3080bce\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.937728 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/825f702b-3e58-4760-a47b-865cf3080bce-log-httpd\") pod \"ceilometer-0\" (UID: \"825f702b-3e58-4760-a47b-865cf3080bce\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.942743 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/825f702b-3e58-4760-a47b-865cf3080bce-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"825f702b-3e58-4760-a47b-865cf3080bce\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.944403 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/825f702b-3e58-4760-a47b-865cf3080bce-scripts\") pod \"ceilometer-0\" (UID: \"825f702b-3e58-4760-a47b-865cf3080bce\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:36:00 crc kubenswrapper[4796]: I1202 20:36:00.949861 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlfhc\" (UniqueName: \"kubernetes.io/projected/825f702b-3e58-4760-a47b-865cf3080bce-kube-api-access-mlfhc\") pod \"ceilometer-0\" (UID: \"825f702b-3e58-4760-a47b-865cf3080bce\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:36:01 crc kubenswrapper[4796]: I1202 20:36:01.080121 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:36:01 crc kubenswrapper[4796]: I1202 20:36:01.281687 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="214f3adf-67a1-48bf-a163-3e9f45139e88" path="/var/lib/kubelet/pods/214f3adf-67a1-48bf-a163-3e9f45139e88/volumes" Dec 02 20:36:01 crc kubenswrapper[4796]: I1202 20:36:01.607472 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:36:01 crc kubenswrapper[4796]: W1202 20:36:01.614763 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod825f702b_3e58_4760_a47b_865cf3080bce.slice/crio-3e70c565ba0c6382b8755e911bc304a237508b1b753ff36284d50978e459cee9 WatchSource:0}: Error finding container 3e70c565ba0c6382b8755e911bc304a237508b1b753ff36284d50978e459cee9: Status 404 returned error can't find the container with id 3e70c565ba0c6382b8755e911bc304a237508b1b753ff36284d50978e459cee9 Dec 02 20:36:01 crc kubenswrapper[4796]: I1202 20:36:01.688307 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"825f702b-3e58-4760-a47b-865cf3080bce","Type":"ContainerStarted","Data":"3e70c565ba0c6382b8755e911bc304a237508b1b753ff36284d50978e459cee9"} Dec 02 20:36:02 crc kubenswrapper[4796]: I1202 20:36:02.713811 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"825f702b-3e58-4760-a47b-865cf3080bce","Type":"ContainerStarted","Data":"d92592e37903fcfe6b893d4e6b0936950336ed2f3ddc01723362279958190d63"} Dec 02 20:36:03 crc kubenswrapper[4796]: I1202 20:36:03.729540 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"825f702b-3e58-4760-a47b-865cf3080bce","Type":"ContainerStarted","Data":"55750a1422961720306c434826f2c9455f97fcc65a063218f3c37f2080d383bf"} Dec 02 20:36:04 crc kubenswrapper[4796]: I1202 20:36:04.753024 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"825f702b-3e58-4760-a47b-865cf3080bce","Type":"ContainerStarted","Data":"960a1df79346d4be03437ffdb3f9eb7c0ddef232d0020e3fec6d34ed0f8ea9d6"} Dec 02 20:36:05 crc kubenswrapper[4796]: I1202 20:36:05.772479 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"825f702b-3e58-4760-a47b-865cf3080bce","Type":"ContainerStarted","Data":"b13945fcb1675b6c7df2f5cb7bd1e8172a70109c6844619db930227af0a34e2a"} Dec 02 20:36:05 crc kubenswrapper[4796]: I1202 20:36:05.772867 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:36:05 crc kubenswrapper[4796]: I1202 20:36:05.802227 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.566437833 podStartE2EDuration="5.802199471s" podCreationTimestamp="2025-12-02 20:36:00 +0000 UTC" firstStartedPulling="2025-12-02 20:36:01.619630569 +0000 UTC m=+1444.623006103" lastFinishedPulling="2025-12-02 20:36:04.855392197 +0000 UTC m=+1447.858767741" observedRunningTime="2025-12-02 20:36:05.794249329 +0000 UTC m=+1448.797624903" watchObservedRunningTime="2025-12-02 20:36:05.802199471 +0000 UTC m=+1448.805575005" Dec 02 20:36:11 crc kubenswrapper[4796]: I1202 20:36:11.921566 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/memcached-0"] Dec 02 20:36:11 crc kubenswrapper[4796]: I1202 20:36:11.922420 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/memcached-0" podUID="ebca558f-ecfa-4b05-b9df-b59f884f0366" containerName="memcached" containerID="cri-o://44041b0d083362feba64bcc6395ec8c36e5201ec9d61421fd1e94207dd43de42" gracePeriod=30 Dec 02 20:36:11 crc kubenswrapper[4796]: I1202 20:36:11.994142 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:36:11 crc kubenswrapper[4796]: I1202 20:36:11.994467 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="cc870eb3-bc6e-44eb-ab6d-3c786278a702" containerName="watcher-decision-engine" containerID="cri-o://25229629b08b8f03c4439b35d1fd3c02e7e70ac188a57cd16fdabc846065e78f" gracePeriod=30 Dec 02 20:36:12 crc kubenswrapper[4796]: I1202 20:36:12.033685 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:36:12 crc kubenswrapper[4796]: I1202 20:36:12.034041 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="3935e453-d78c-4ddb-8de1-51b4699e33be" containerName="watcher-applier" containerID="cri-o://14cc6d0ebd4d0d47b43be90385e3b25e068b565b3e85d0ce4492fa4c321611fe" gracePeriod=30 Dec 02 20:36:12 crc kubenswrapper[4796]: I1202 20:36:12.109184 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:36:12 crc kubenswrapper[4796]: I1202 20:36:12.109447 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="59820ff0-e57d-4496-8566-09abbba3dd5a" containerName="watcher-kuttl-api-log" containerID="cri-o://7ac0292ac146801e6875b6eea531d64e78ec1f04f2be59369c96da79cd021532" gracePeriod=30 Dec 02 20:36:12 crc kubenswrapper[4796]: I1202 20:36:12.109847 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="59820ff0-e57d-4496-8566-09abbba3dd5a" containerName="watcher-api" containerID="cri-o://cee4b2da8c892f5a5d7cabe31a25378a64025e20d65b1c579f6678946677aed0" gracePeriod=30 Dec 02 20:36:12 crc kubenswrapper[4796]: I1202 20:36:12.153275 4796 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-rxxpw"] Dec 02 20:36:12 crc kubenswrapper[4796]: I1202 20:36:12.162343 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-rxxpw"] Dec 02 20:36:12 crc kubenswrapper[4796]: I1202 20:36:12.241938 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-v825p"] Dec 02 20:36:12 crc kubenswrapper[4796]: I1202 20:36:12.243081 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-v825p" Dec 02 20:36:12 crc kubenswrapper[4796]: I1202 20:36:12.246119 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-memcached-mtls" Dec 02 20:36:12 crc kubenswrapper[4796]: I1202 20:36:12.247769 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"osp-secret" Dec 02 20:36:12 crc kubenswrapper[4796]: I1202 20:36:12.264319 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-v825p"] Dec 02 20:36:12 crc kubenswrapper[4796]: I1202 20:36:12.358477 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ac1760c-0cd2-480a-8315-4054fc65f81a-scripts\") pod \"keystone-bootstrap-v825p\" (UID: \"1ac1760c-0cd2-480a-8315-4054fc65f81a\") " pod="watcher-kuttl-default/keystone-bootstrap-v825p" Dec 02 20:36:12 crc kubenswrapper[4796]: I1202 20:36:12.358623 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1ac1760c-0cd2-480a-8315-4054fc65f81a-credential-keys\") pod \"keystone-bootstrap-v825p\" (UID: \"1ac1760c-0cd2-480a-8315-4054fc65f81a\") " pod="watcher-kuttl-default/keystone-bootstrap-v825p" Dec 02 20:36:12 crc kubenswrapper[4796]: I1202 20:36:12.358667 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8mm9\" (UniqueName: \"kubernetes.io/projected/1ac1760c-0cd2-480a-8315-4054fc65f81a-kube-api-access-g8mm9\") pod \"keystone-bootstrap-v825p\" (UID: \"1ac1760c-0cd2-480a-8315-4054fc65f81a\") " pod="watcher-kuttl-default/keystone-bootstrap-v825p" Dec 02 20:36:12 crc kubenswrapper[4796]: I1202 20:36:12.358703 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1ac1760c-0cd2-480a-8315-4054fc65f81a-cert-memcached-mtls\") pod \"keystone-bootstrap-v825p\" (UID: \"1ac1760c-0cd2-480a-8315-4054fc65f81a\") " pod="watcher-kuttl-default/keystone-bootstrap-v825p" Dec 02 20:36:12 crc kubenswrapper[4796]: I1202 20:36:12.358918 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ac1760c-0cd2-480a-8315-4054fc65f81a-config-data\") pod \"keystone-bootstrap-v825p\" (UID: \"1ac1760c-0cd2-480a-8315-4054fc65f81a\") " pod="watcher-kuttl-default/keystone-bootstrap-v825p" Dec 02 20:36:12 crc kubenswrapper[4796]: I1202 20:36:12.359015 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1ac1760c-0cd2-480a-8315-4054fc65f81a-fernet-keys\") pod \"keystone-bootstrap-v825p\" (UID: \"1ac1760c-0cd2-480a-8315-4054fc65f81a\") " 
pod="watcher-kuttl-default/keystone-bootstrap-v825p" Dec 02 20:36:12 crc kubenswrapper[4796]: I1202 20:36:12.359225 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ac1760c-0cd2-480a-8315-4054fc65f81a-combined-ca-bundle\") pod \"keystone-bootstrap-v825p\" (UID: \"1ac1760c-0cd2-480a-8315-4054fc65f81a\") " pod="watcher-kuttl-default/keystone-bootstrap-v825p" Dec 02 20:36:12 crc kubenswrapper[4796]: I1202 20:36:12.461172 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ac1760c-0cd2-480a-8315-4054fc65f81a-combined-ca-bundle\") pod \"keystone-bootstrap-v825p\" (UID: \"1ac1760c-0cd2-480a-8315-4054fc65f81a\") " pod="watcher-kuttl-default/keystone-bootstrap-v825p" Dec 02 20:36:12 crc kubenswrapper[4796]: I1202 20:36:12.461274 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ac1760c-0cd2-480a-8315-4054fc65f81a-scripts\") pod \"keystone-bootstrap-v825p\" (UID: \"1ac1760c-0cd2-480a-8315-4054fc65f81a\") " pod="watcher-kuttl-default/keystone-bootstrap-v825p" Dec 02 20:36:12 crc kubenswrapper[4796]: I1202 20:36:12.461340 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1ac1760c-0cd2-480a-8315-4054fc65f81a-credential-keys\") pod \"keystone-bootstrap-v825p\" (UID: \"1ac1760c-0cd2-480a-8315-4054fc65f81a\") " pod="watcher-kuttl-default/keystone-bootstrap-v825p" Dec 02 20:36:12 crc kubenswrapper[4796]: I1202 20:36:12.461375 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8mm9\" (UniqueName: \"kubernetes.io/projected/1ac1760c-0cd2-480a-8315-4054fc65f81a-kube-api-access-g8mm9\") pod \"keystone-bootstrap-v825p\" (UID: \"1ac1760c-0cd2-480a-8315-4054fc65f81a\") " pod="watcher-kuttl-default/keystone-bootstrap-v825p" Dec 02 20:36:12 crc kubenswrapper[4796]: I1202 20:36:12.461410 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1ac1760c-0cd2-480a-8315-4054fc65f81a-cert-memcached-mtls\") pod \"keystone-bootstrap-v825p\" (UID: \"1ac1760c-0cd2-480a-8315-4054fc65f81a\") " pod="watcher-kuttl-default/keystone-bootstrap-v825p" Dec 02 20:36:12 crc kubenswrapper[4796]: I1202 20:36:12.461446 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ac1760c-0cd2-480a-8315-4054fc65f81a-config-data\") pod \"keystone-bootstrap-v825p\" (UID: \"1ac1760c-0cd2-480a-8315-4054fc65f81a\") " pod="watcher-kuttl-default/keystone-bootstrap-v825p" Dec 02 20:36:12 crc kubenswrapper[4796]: I1202 20:36:12.461469 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1ac1760c-0cd2-480a-8315-4054fc65f81a-fernet-keys\") pod \"keystone-bootstrap-v825p\" (UID: \"1ac1760c-0cd2-480a-8315-4054fc65f81a\") " pod="watcher-kuttl-default/keystone-bootstrap-v825p" Dec 02 20:36:12 crc kubenswrapper[4796]: I1202 20:36:12.467451 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1ac1760c-0cd2-480a-8315-4054fc65f81a-credential-keys\") pod \"keystone-bootstrap-v825p\" (UID: \"1ac1760c-0cd2-480a-8315-4054fc65f81a\") " 
pod="watcher-kuttl-default/keystone-bootstrap-v825p" Dec 02 20:36:12 crc kubenswrapper[4796]: I1202 20:36:12.467856 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1ac1760c-0cd2-480a-8315-4054fc65f81a-cert-memcached-mtls\") pod \"keystone-bootstrap-v825p\" (UID: \"1ac1760c-0cd2-480a-8315-4054fc65f81a\") " pod="watcher-kuttl-default/keystone-bootstrap-v825p" Dec 02 20:36:12 crc kubenswrapper[4796]: I1202 20:36:12.468595 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ac1760c-0cd2-480a-8315-4054fc65f81a-combined-ca-bundle\") pod \"keystone-bootstrap-v825p\" (UID: \"1ac1760c-0cd2-480a-8315-4054fc65f81a\") " pod="watcher-kuttl-default/keystone-bootstrap-v825p" Dec 02 20:36:12 crc kubenswrapper[4796]: I1202 20:36:12.468703 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ac1760c-0cd2-480a-8315-4054fc65f81a-scripts\") pod \"keystone-bootstrap-v825p\" (UID: \"1ac1760c-0cd2-480a-8315-4054fc65f81a\") " pod="watcher-kuttl-default/keystone-bootstrap-v825p" Dec 02 20:36:12 crc kubenswrapper[4796]: I1202 20:36:12.472319 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ac1760c-0cd2-480a-8315-4054fc65f81a-config-data\") pod \"keystone-bootstrap-v825p\" (UID: \"1ac1760c-0cd2-480a-8315-4054fc65f81a\") " pod="watcher-kuttl-default/keystone-bootstrap-v825p" Dec 02 20:36:12 crc kubenswrapper[4796]: I1202 20:36:12.472890 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1ac1760c-0cd2-480a-8315-4054fc65f81a-fernet-keys\") pod \"keystone-bootstrap-v825p\" (UID: \"1ac1760c-0cd2-480a-8315-4054fc65f81a\") " pod="watcher-kuttl-default/keystone-bootstrap-v825p" Dec 02 20:36:12 crc kubenswrapper[4796]: I1202 20:36:12.491821 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8mm9\" (UniqueName: \"kubernetes.io/projected/1ac1760c-0cd2-480a-8315-4054fc65f81a-kube-api-access-g8mm9\") pod \"keystone-bootstrap-v825p\" (UID: \"1ac1760c-0cd2-480a-8315-4054fc65f81a\") " pod="watcher-kuttl-default/keystone-bootstrap-v825p" Dec 02 20:36:12 crc kubenswrapper[4796]: I1202 20:36:12.568172 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-v825p" Dec 02 20:36:12 crc kubenswrapper[4796]: I1202 20:36:12.870481 4796 generic.go:334] "Generic (PLEG): container finished" podID="59820ff0-e57d-4496-8566-09abbba3dd5a" containerID="7ac0292ac146801e6875b6eea531d64e78ec1f04f2be59369c96da79cd021532" exitCode=143 Dec 02 20:36:12 crc kubenswrapper[4796]: I1202 20:36:12.870529 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"59820ff0-e57d-4496-8566-09abbba3dd5a","Type":"ContainerDied","Data":"7ac0292ac146801e6875b6eea531d64e78ec1f04f2be59369c96da79cd021532"} Dec 02 20:36:13 crc kubenswrapper[4796]: E1202 20:36:13.228201 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59820ff0_e57d_4496_8566_09abbba3dd5a.slice/crio-conmon-cee4b2da8c892f5a5d7cabe31a25378a64025e20d65b1c579f6678946677aed0.scope\": RecentStats: unable to find data in memory cache]" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.240057 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-v825p"] Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.278632 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bbd64eb-2206-45b9-b6c5-8be80b5ae862" path="/var/lib/kubelet/pods/7bbd64eb-2206-45b9-b6c5-8be80b5ae862/volumes" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.449295 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.556111 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/memcached-0" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.594765 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/59820ff0-e57d-4496-8566-09abbba3dd5a-internal-tls-certs\") pod \"59820ff0-e57d-4496-8566-09abbba3dd5a\" (UID: \"59820ff0-e57d-4496-8566-09abbba3dd5a\") " Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.594873 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59820ff0-e57d-4496-8566-09abbba3dd5a-config-data\") pod \"59820ff0-e57d-4496-8566-09abbba3dd5a\" (UID: \"59820ff0-e57d-4496-8566-09abbba3dd5a\") " Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.594967 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59820ff0-e57d-4496-8566-09abbba3dd5a-logs\") pod \"59820ff0-e57d-4496-8566-09abbba3dd5a\" (UID: \"59820ff0-e57d-4496-8566-09abbba3dd5a\") " Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.595005 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59820ff0-e57d-4496-8566-09abbba3dd5a-combined-ca-bundle\") pod \"59820ff0-e57d-4496-8566-09abbba3dd5a\" (UID: \"59820ff0-e57d-4496-8566-09abbba3dd5a\") " Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.595023 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59820ff0-e57d-4496-8566-09abbba3dd5a-public-tls-certs\") pod \"59820ff0-e57d-4496-8566-09abbba3dd5a\" (UID: \"59820ff0-e57d-4496-8566-09abbba3dd5a\") " Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.595121 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgh66\" (UniqueName: \"kubernetes.io/projected/59820ff0-e57d-4496-8566-09abbba3dd5a-kube-api-access-rgh66\") pod \"59820ff0-e57d-4496-8566-09abbba3dd5a\" (UID: \"59820ff0-e57d-4496-8566-09abbba3dd5a\") " Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.595817 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/59820ff0-e57d-4496-8566-09abbba3dd5a-custom-prometheus-ca\") pod \"59820ff0-e57d-4496-8566-09abbba3dd5a\" (UID: \"59820ff0-e57d-4496-8566-09abbba3dd5a\") " Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.597554 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59820ff0-e57d-4496-8566-09abbba3dd5a-logs" (OuterVolumeSpecName: "logs") pod "59820ff0-e57d-4496-8566-09abbba3dd5a" (UID: "59820ff0-e57d-4496-8566-09abbba3dd5a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.628643 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59820ff0-e57d-4496-8566-09abbba3dd5a-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "59820ff0-e57d-4496-8566-09abbba3dd5a" (UID: "59820ff0-e57d-4496-8566-09abbba3dd5a"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.634397 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59820ff0-e57d-4496-8566-09abbba3dd5a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59820ff0-e57d-4496-8566-09abbba3dd5a" (UID: "59820ff0-e57d-4496-8566-09abbba3dd5a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.651493 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59820ff0-e57d-4496-8566-09abbba3dd5a-kube-api-access-rgh66" (OuterVolumeSpecName: "kube-api-access-rgh66") pod "59820ff0-e57d-4496-8566-09abbba3dd5a" (UID: "59820ff0-e57d-4496-8566-09abbba3dd5a"). InnerVolumeSpecName "kube-api-access-rgh66". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.663907 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59820ff0-e57d-4496-8566-09abbba3dd5a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "59820ff0-e57d-4496-8566-09abbba3dd5a" (UID: "59820ff0-e57d-4496-8566-09abbba3dd5a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.671496 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59820ff0-e57d-4496-8566-09abbba3dd5a-config-data" (OuterVolumeSpecName: "config-data") pod "59820ff0-e57d-4496-8566-09abbba3dd5a" (UID: "59820ff0-e57d-4496-8566-09abbba3dd5a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.672674 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59820ff0-e57d-4496-8566-09abbba3dd5a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "59820ff0-e57d-4496-8566-09abbba3dd5a" (UID: "59820ff0-e57d-4496-8566-09abbba3dd5a"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.697424 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebca558f-ecfa-4b05-b9df-b59f884f0366-combined-ca-bundle\") pod \"ebca558f-ecfa-4b05-b9df-b59f884f0366\" (UID: \"ebca558f-ecfa-4b05-b9df-b59f884f0366\") " Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.697535 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ebca558f-ecfa-4b05-b9df-b59f884f0366-kolla-config\") pod \"ebca558f-ecfa-4b05-b9df-b59f884f0366\" (UID: \"ebca558f-ecfa-4b05-b9df-b59f884f0366\") " Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.697619 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebca558f-ecfa-4b05-b9df-b59f884f0366-memcached-tls-certs\") pod \"ebca558f-ecfa-4b05-b9df-b59f884f0366\" (UID: \"ebca558f-ecfa-4b05-b9df-b59f884f0366\") " Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.697704 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hr4j\" (UniqueName: \"kubernetes.io/projected/ebca558f-ecfa-4b05-b9df-b59f884f0366-kube-api-access-6hr4j\") pod \"ebca558f-ecfa-4b05-b9df-b59f884f0366\" (UID: \"ebca558f-ecfa-4b05-b9df-b59f884f0366\") " Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.697731 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ebca558f-ecfa-4b05-b9df-b59f884f0366-config-data\") pod \"ebca558f-ecfa-4b05-b9df-b59f884f0366\" (UID: \"ebca558f-ecfa-4b05-b9df-b59f884f0366\") " Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.698053 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59820ff0-e57d-4496-8566-09abbba3dd5a-logs\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.698072 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59820ff0-e57d-4496-8566-09abbba3dd5a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.698084 4796 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59820ff0-e57d-4496-8566-09abbba3dd5a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.698096 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgh66\" (UniqueName: \"kubernetes.io/projected/59820ff0-e57d-4496-8566-09abbba3dd5a-kube-api-access-rgh66\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.698104 4796 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/59820ff0-e57d-4496-8566-09abbba3dd5a-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.698112 4796 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/59820ff0-e57d-4496-8566-09abbba3dd5a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.698121 4796 reconciler_common.go:293] 
"Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59820ff0-e57d-4496-8566-09abbba3dd5a-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.698804 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebca558f-ecfa-4b05-b9df-b59f884f0366-config-data" (OuterVolumeSpecName: "config-data") pod "ebca558f-ecfa-4b05-b9df-b59f884f0366" (UID: "ebca558f-ecfa-4b05-b9df-b59f884f0366"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.704194 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebca558f-ecfa-4b05-b9df-b59f884f0366-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "ebca558f-ecfa-4b05-b9df-b59f884f0366" (UID: "ebca558f-ecfa-4b05-b9df-b59f884f0366"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.715696 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebca558f-ecfa-4b05-b9df-b59f884f0366-kube-api-access-6hr4j" (OuterVolumeSpecName: "kube-api-access-6hr4j") pod "ebca558f-ecfa-4b05-b9df-b59f884f0366" (UID: "ebca558f-ecfa-4b05-b9df-b59f884f0366"). InnerVolumeSpecName "kube-api-access-6hr4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.727359 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebca558f-ecfa-4b05-b9df-b59f884f0366-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ebca558f-ecfa-4b05-b9df-b59f884f0366" (UID: "ebca558f-ecfa-4b05-b9df-b59f884f0366"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.742685 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebca558f-ecfa-4b05-b9df-b59f884f0366-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "ebca558f-ecfa-4b05-b9df-b59f884f0366" (UID: "ebca558f-ecfa-4b05-b9df-b59f884f0366"). InnerVolumeSpecName "memcached-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.799588 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebca558f-ecfa-4b05-b9df-b59f884f0366-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.799636 4796 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ebca558f-ecfa-4b05-b9df-b59f884f0366-kolla-config\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.799648 4796 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebca558f-ecfa-4b05-b9df-b59f884f0366-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.799659 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hr4j\" (UniqueName: \"kubernetes.io/projected/ebca558f-ecfa-4b05-b9df-b59f884f0366-kube-api-access-6hr4j\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.799675 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ebca558f-ecfa-4b05-b9df-b59f884f0366-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.850653 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.883040 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-v825p" event={"ID":"1ac1760c-0cd2-480a-8315-4054fc65f81a","Type":"ContainerStarted","Data":"e1ecd5375a0d4335d7df648084177b4caf6c9c39a105b64285947284d2114186"} Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.883124 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-v825p" event={"ID":"1ac1760c-0cd2-480a-8315-4054fc65f81a","Type":"ContainerStarted","Data":"b5c014ad73848b60a2bd1169a368e52c830824d6da69d60f5d4c326199969a94"} Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.886126 4796 generic.go:334] "Generic (PLEG): container finished" podID="ebca558f-ecfa-4b05-b9df-b59f884f0366" containerID="44041b0d083362feba64bcc6395ec8c36e5201ec9d61421fd1e94207dd43de42" exitCode=0 Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.886187 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"ebca558f-ecfa-4b05-b9df-b59f884f0366","Type":"ContainerDied","Data":"44041b0d083362feba64bcc6395ec8c36e5201ec9d61421fd1e94207dd43de42"} Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.886211 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"ebca558f-ecfa-4b05-b9df-b59f884f0366","Type":"ContainerDied","Data":"ff4a8380c7245b581d230d863d413ba391bb99bd8b774b79e660b7088eaacb0e"} Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.886231 4796 scope.go:117] "RemoveContainer" containerID="44041b0d083362feba64bcc6395ec8c36e5201ec9d61421fd1e94207dd43de42" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.886351 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/memcached-0" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.888504 4796 generic.go:334] "Generic (PLEG): container finished" podID="3935e453-d78c-4ddb-8de1-51b4699e33be" containerID="14cc6d0ebd4d0d47b43be90385e3b25e068b565b3e85d0ce4492fa4c321611fe" exitCode=0 Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.888569 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"3935e453-d78c-4ddb-8de1-51b4699e33be","Type":"ContainerDied","Data":"14cc6d0ebd4d0d47b43be90385e3b25e068b565b3e85d0ce4492fa4c321611fe"} Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.888603 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"3935e453-d78c-4ddb-8de1-51b4699e33be","Type":"ContainerDied","Data":"02e051a98bde7ca6b750f5aedfd6af78bd885b3b13f177cb6c2f727652fd6f4b"} Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.888662 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.897309 4796 generic.go:334] "Generic (PLEG): container finished" podID="59820ff0-e57d-4496-8566-09abbba3dd5a" containerID="cee4b2da8c892f5a5d7cabe31a25378a64025e20d65b1c579f6678946677aed0" exitCode=0 Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.897350 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"59820ff0-e57d-4496-8566-09abbba3dd5a","Type":"ContainerDied","Data":"cee4b2da8c892f5a5d7cabe31a25378a64025e20d65b1c579f6678946677aed0"} Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.897376 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"59820ff0-e57d-4496-8566-09abbba3dd5a","Type":"ContainerDied","Data":"3de3bdf5502f1b52a2b7090495c8644887e2602b6eecc2f9c3ce687cda8a6157"} Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.897430 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.900372 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3935e453-d78c-4ddb-8de1-51b4699e33be-config-data\") pod \"3935e453-d78c-4ddb-8de1-51b4699e33be\" (UID: \"3935e453-d78c-4ddb-8de1-51b4699e33be\") " Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.900456 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3935e453-d78c-4ddb-8de1-51b4699e33be-combined-ca-bundle\") pod \"3935e453-d78c-4ddb-8de1-51b4699e33be\" (UID: \"3935e453-d78c-4ddb-8de1-51b4699e33be\") " Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.900486 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hstqv\" (UniqueName: \"kubernetes.io/projected/3935e453-d78c-4ddb-8de1-51b4699e33be-kube-api-access-hstqv\") pod \"3935e453-d78c-4ddb-8de1-51b4699e33be\" (UID: \"3935e453-d78c-4ddb-8de1-51b4699e33be\") " Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.900557 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3935e453-d78c-4ddb-8de1-51b4699e33be-logs\") pod \"3935e453-d78c-4ddb-8de1-51b4699e33be\" (UID: \"3935e453-d78c-4ddb-8de1-51b4699e33be\") " Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.901172 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3935e453-d78c-4ddb-8de1-51b4699e33be-logs" (OuterVolumeSpecName: "logs") pod "3935e453-d78c-4ddb-8de1-51b4699e33be" (UID: "3935e453-d78c-4ddb-8de1-51b4699e33be"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.905469 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3935e453-d78c-4ddb-8de1-51b4699e33be-kube-api-access-hstqv" (OuterVolumeSpecName: "kube-api-access-hstqv") pod "3935e453-d78c-4ddb-8de1-51b4699e33be" (UID: "3935e453-d78c-4ddb-8de1-51b4699e33be"). InnerVolumeSpecName "kube-api-access-hstqv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.911547 4796 scope.go:117] "RemoveContainer" containerID="44041b0d083362feba64bcc6395ec8c36e5201ec9d61421fd1e94207dd43de42" Dec 02 20:36:13 crc kubenswrapper[4796]: E1202 20:36:13.913808 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44041b0d083362feba64bcc6395ec8c36e5201ec9d61421fd1e94207dd43de42\": container with ID starting with 44041b0d083362feba64bcc6395ec8c36e5201ec9d61421fd1e94207dd43de42 not found: ID does not exist" containerID="44041b0d083362feba64bcc6395ec8c36e5201ec9d61421fd1e94207dd43de42" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.913847 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44041b0d083362feba64bcc6395ec8c36e5201ec9d61421fd1e94207dd43de42"} err="failed to get container status \"44041b0d083362feba64bcc6395ec8c36e5201ec9d61421fd1e94207dd43de42\": rpc error: code = NotFound desc = could not find container \"44041b0d083362feba64bcc6395ec8c36e5201ec9d61421fd1e94207dd43de42\": container with ID starting with 44041b0d083362feba64bcc6395ec8c36e5201ec9d61421fd1e94207dd43de42 not found: ID does not exist" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.913870 4796 scope.go:117] "RemoveContainer" containerID="14cc6d0ebd4d0d47b43be90385e3b25e068b565b3e85d0ce4492fa4c321611fe" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.938098 4796 scope.go:117] "RemoveContainer" containerID="14cc6d0ebd4d0d47b43be90385e3b25e068b565b3e85d0ce4492fa4c321611fe" Dec 02 20:36:13 crc kubenswrapper[4796]: E1202 20:36:13.946917 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14cc6d0ebd4d0d47b43be90385e3b25e068b565b3e85d0ce4492fa4c321611fe\": container with ID starting with 14cc6d0ebd4d0d47b43be90385e3b25e068b565b3e85d0ce4492fa4c321611fe not found: ID does not exist" containerID="14cc6d0ebd4d0d47b43be90385e3b25e068b565b3e85d0ce4492fa4c321611fe" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.946967 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14cc6d0ebd4d0d47b43be90385e3b25e068b565b3e85d0ce4492fa4c321611fe"} err="failed to get container status \"14cc6d0ebd4d0d47b43be90385e3b25e068b565b3e85d0ce4492fa4c321611fe\": rpc error: code = NotFound desc = could not find container \"14cc6d0ebd4d0d47b43be90385e3b25e068b565b3e85d0ce4492fa4c321611fe\": container with ID starting with 14cc6d0ebd4d0d47b43be90385e3b25e068b565b3e85d0ce4492fa4c321611fe not found: ID does not exist" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.946994 4796 scope.go:117] "RemoveContainer" containerID="cee4b2da8c892f5a5d7cabe31a25378a64025e20d65b1c579f6678946677aed0" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.948461 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3935e453-d78c-4ddb-8de1-51b4699e33be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3935e453-d78c-4ddb-8de1-51b4699e33be" (UID: "3935e453-d78c-4ddb-8de1-51b4699e33be"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.953393 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-bootstrap-v825p" podStartSLOduration=1.953371514 podStartE2EDuration="1.953371514s" podCreationTimestamp="2025-12-02 20:36:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:36:13.915498925 +0000 UTC m=+1456.918874479" watchObservedRunningTime="2025-12-02 20:36:13.953371514 +0000 UTC m=+1456.956747058" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.978331 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/memcached-0"] Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.995421 4796 scope.go:117] "RemoveContainer" containerID="7ac0292ac146801e6875b6eea531d64e78ec1f04f2be59369c96da79cd021532" Dec 02 20:36:13 crc kubenswrapper[4796]: I1202 20:36:13.999497 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/memcached-0"] Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.003009 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3935e453-d78c-4ddb-8de1-51b4699e33be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.003032 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hstqv\" (UniqueName: \"kubernetes.io/projected/3935e453-d78c-4ddb-8de1-51b4699e33be-kube-api-access-hstqv\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.003041 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3935e453-d78c-4ddb-8de1-51b4699e33be-logs\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.042490 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3935e453-d78c-4ddb-8de1-51b4699e33be-config-data" (OuterVolumeSpecName: "config-data") pod "3935e453-d78c-4ddb-8de1-51b4699e33be" (UID: "3935e453-d78c-4ddb-8de1-51b4699e33be"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.043519 4796 scope.go:117] "RemoveContainer" containerID="cee4b2da8c892f5a5d7cabe31a25378a64025e20d65b1c579f6678946677aed0" Dec 02 20:36:14 crc kubenswrapper[4796]: E1202 20:36:14.048398 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cee4b2da8c892f5a5d7cabe31a25378a64025e20d65b1c579f6678946677aed0\": container with ID starting with cee4b2da8c892f5a5d7cabe31a25378a64025e20d65b1c579f6678946677aed0 not found: ID does not exist" containerID="cee4b2da8c892f5a5d7cabe31a25378a64025e20d65b1c579f6678946677aed0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.048493 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cee4b2da8c892f5a5d7cabe31a25378a64025e20d65b1c579f6678946677aed0"} err="failed to get container status \"cee4b2da8c892f5a5d7cabe31a25378a64025e20d65b1c579f6678946677aed0\": rpc error: code = NotFound desc = could not find container \"cee4b2da8c892f5a5d7cabe31a25378a64025e20d65b1c579f6678946677aed0\": container with ID starting with cee4b2da8c892f5a5d7cabe31a25378a64025e20d65b1c579f6678946677aed0 not found: ID does not exist" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.048538 4796 scope.go:117] "RemoveContainer" containerID="7ac0292ac146801e6875b6eea531d64e78ec1f04f2be59369c96da79cd021532" Dec 02 20:36:14 crc kubenswrapper[4796]: E1202 20:36:14.048915 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ac0292ac146801e6875b6eea531d64e78ec1f04f2be59369c96da79cd021532\": container with ID starting with 7ac0292ac146801e6875b6eea531d64e78ec1f04f2be59369c96da79cd021532 not found: ID does not exist" containerID="7ac0292ac146801e6875b6eea531d64e78ec1f04f2be59369c96da79cd021532" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.048942 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ac0292ac146801e6875b6eea531d64e78ec1f04f2be59369c96da79cd021532"} err="failed to get container status \"7ac0292ac146801e6875b6eea531d64e78ec1f04f2be59369c96da79cd021532\": rpc error: code = NotFound desc = could not find container \"7ac0292ac146801e6875b6eea531d64e78ec1f04f2be59369c96da79cd021532\": container with ID starting with 7ac0292ac146801e6875b6eea531d64e78ec1f04f2be59369c96da79cd021532 not found: ID does not exist" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.065185 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/memcached-0"] Dec 02 20:36:14 crc kubenswrapper[4796]: E1202 20:36:14.071085 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59820ff0-e57d-4496-8566-09abbba3dd5a" containerName="watcher-api" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.071129 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="59820ff0-e57d-4496-8566-09abbba3dd5a" containerName="watcher-api" Dec 02 20:36:14 crc kubenswrapper[4796]: E1202 20:36:14.071210 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3935e453-d78c-4ddb-8de1-51b4699e33be" containerName="watcher-applier" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.071225 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="3935e453-d78c-4ddb-8de1-51b4699e33be" containerName="watcher-applier" Dec 02 20:36:14 crc kubenswrapper[4796]: E1202 20:36:14.071247 4796 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59820ff0-e57d-4496-8566-09abbba3dd5a" containerName="watcher-kuttl-api-log" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.071273 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="59820ff0-e57d-4496-8566-09abbba3dd5a" containerName="watcher-kuttl-api-log" Dec 02 20:36:14 crc kubenswrapper[4796]: E1202 20:36:14.071303 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebca558f-ecfa-4b05-b9df-b59f884f0366" containerName="memcached" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.071311 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebca558f-ecfa-4b05-b9df-b59f884f0366" containerName="memcached" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.072462 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="59820ff0-e57d-4496-8566-09abbba3dd5a" containerName="watcher-api" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.072514 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="59820ff0-e57d-4496-8566-09abbba3dd5a" containerName="watcher-kuttl-api-log" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.072539 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="3935e453-d78c-4ddb-8de1-51b4699e33be" containerName="watcher-applier" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.072560 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebca558f-ecfa-4b05-b9df-b59f884f0366" containerName="memcached" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.075638 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/memcached-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.079381 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"memcached-memcached-dockercfg-mkczv" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.080358 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"memcached-config-data" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.080821 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-memcached-svc" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.103844 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.105951 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2919071b-425d-403e-9bff-fccdd9142500-config-data\") pod \"memcached-0\" (UID: \"2919071b-425d-403e-9bff-fccdd9142500\") " pod="watcher-kuttl-default/memcached-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.106283 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9jt2\" (UniqueName: \"kubernetes.io/projected/2919071b-425d-403e-9bff-fccdd9142500-kube-api-access-g9jt2\") pod \"memcached-0\" (UID: \"2919071b-425d-403e-9bff-fccdd9142500\") " pod="watcher-kuttl-default/memcached-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.106488 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2919071b-425d-403e-9bff-fccdd9142500-combined-ca-bundle\") pod \"memcached-0\" (UID: 
\"2919071b-425d-403e-9bff-fccdd9142500\") " pod="watcher-kuttl-default/memcached-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.106594 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2919071b-425d-403e-9bff-fccdd9142500-memcached-tls-certs\") pod \"memcached-0\" (UID: \"2919071b-425d-403e-9bff-fccdd9142500\") " pod="watcher-kuttl-default/memcached-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.106731 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2919071b-425d-403e-9bff-fccdd9142500-kolla-config\") pod \"memcached-0\" (UID: \"2919071b-425d-403e-9bff-fccdd9142500\") " pod="watcher-kuttl-default/memcached-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.106928 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3935e453-d78c-4ddb-8de1-51b4699e33be-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.124046 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.131840 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/memcached-0"] Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.139814 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.141755 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.144667 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-internal-svc" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.144781 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-public-svc" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.145120 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.148012 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.208156 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2919071b-425d-403e-9bff-fccdd9142500-combined-ca-bundle\") pod \"memcached-0\" (UID: \"2919071b-425d-403e-9bff-fccdd9142500\") " pod="watcher-kuttl-default/memcached-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.209420 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2919071b-425d-403e-9bff-fccdd9142500-memcached-tls-certs\") pod \"memcached-0\" (UID: \"2919071b-425d-403e-9bff-fccdd9142500\") " pod="watcher-kuttl-default/memcached-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.211173 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2919071b-425d-403e-9bff-fccdd9142500-kolla-config\") pod \"memcached-0\" (UID: 
\"2919071b-425d-403e-9bff-fccdd9142500\") " pod="watcher-kuttl-default/memcached-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.210414 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2919071b-425d-403e-9bff-fccdd9142500-kolla-config\") pod \"memcached-0\" (UID: \"2919071b-425d-403e-9bff-fccdd9142500\") " pod="watcher-kuttl-default/memcached-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.211385 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c128dd3-15ff-4e59-9154-503963c6f219-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"6c128dd3-15ff-4e59-9154-503963c6f219\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.212040 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/6c128dd3-15ff-4e59-9154-503963c6f219-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"6c128dd3-15ff-4e59-9154-503963c6f219\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.212194 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2919071b-425d-403e-9bff-fccdd9142500-config-data\") pod \"memcached-0\" (UID: \"2919071b-425d-403e-9bff-fccdd9142500\") " pod="watcher-kuttl-default/memcached-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.212237 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9jt2\" (UniqueName: \"kubernetes.io/projected/2919071b-425d-403e-9bff-fccdd9142500-kube-api-access-g9jt2\") pod \"memcached-0\" (UID: \"2919071b-425d-403e-9bff-fccdd9142500\") " pod="watcher-kuttl-default/memcached-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.212285 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8srt\" (UniqueName: \"kubernetes.io/projected/6c128dd3-15ff-4e59-9154-503963c6f219-kube-api-access-v8srt\") pod \"watcher-kuttl-api-0\" (UID: \"6c128dd3-15ff-4e59-9154-503963c6f219\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.212350 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c128dd3-15ff-4e59-9154-503963c6f219-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"6c128dd3-15ff-4e59-9154-503963c6f219\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.212415 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c128dd3-15ff-4e59-9154-503963c6f219-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"6c128dd3-15ff-4e59-9154-503963c6f219\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.212443 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c128dd3-15ff-4e59-9154-503963c6f219-logs\") pod \"watcher-kuttl-api-0\" (UID: 
\"6c128dd3-15ff-4e59-9154-503963c6f219\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.212469 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2919071b-425d-403e-9bff-fccdd9142500-memcached-tls-certs\") pod \"memcached-0\" (UID: \"2919071b-425d-403e-9bff-fccdd9142500\") " pod="watcher-kuttl-default/memcached-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.212485 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c128dd3-15ff-4e59-9154-503963c6f219-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"6c128dd3-15ff-4e59-9154-503963c6f219\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.212567 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6c128dd3-15ff-4e59-9154-503963c6f219-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"6c128dd3-15ff-4e59-9154-503963c6f219\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.212999 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2919071b-425d-403e-9bff-fccdd9142500-config-data\") pod \"memcached-0\" (UID: \"2919071b-425d-403e-9bff-fccdd9142500\") " pod="watcher-kuttl-default/memcached-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.222456 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2919071b-425d-403e-9bff-fccdd9142500-combined-ca-bundle\") pod \"memcached-0\" (UID: \"2919071b-425d-403e-9bff-fccdd9142500\") " pod="watcher-kuttl-default/memcached-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.231509 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.238384 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.239097 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9jt2\" (UniqueName: \"kubernetes.io/projected/2919071b-425d-403e-9bff-fccdd9142500-kube-api-access-g9jt2\") pod \"memcached-0\" (UID: \"2919071b-425d-403e-9bff-fccdd9142500\") " pod="watcher-kuttl-default/memcached-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.248075 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.249245 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.260871 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.262165 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.314839 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8srt\" (UniqueName: \"kubernetes.io/projected/6c128dd3-15ff-4e59-9154-503963c6f219-kube-api-access-v8srt\") pod \"watcher-kuttl-api-0\" (UID: \"6c128dd3-15ff-4e59-9154-503963c6f219\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.314895 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c128dd3-15ff-4e59-9154-503963c6f219-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"6c128dd3-15ff-4e59-9154-503963c6f219\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.314932 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c128dd3-15ff-4e59-9154-503963c6f219-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"6c128dd3-15ff-4e59-9154-503963c6f219\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.314976 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c128dd3-15ff-4e59-9154-503963c6f219-logs\") pod \"watcher-kuttl-api-0\" (UID: \"6c128dd3-15ff-4e59-9154-503963c6f219\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.315049 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c128dd3-15ff-4e59-9154-503963c6f219-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"6c128dd3-15ff-4e59-9154-503963c6f219\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.315090 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b17e49e-2ba6-420e-913c-841bcf939fae-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"1b17e49e-2ba6-420e-913c-841bcf939fae\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.315116 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6c128dd3-15ff-4e59-9154-503963c6f219-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"6c128dd3-15ff-4e59-9154-503963c6f219\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.315155 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b17e49e-2ba6-420e-913c-841bcf939fae-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"1b17e49e-2ba6-420e-913c-841bcf939fae\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:36:14 crc 
kubenswrapper[4796]: I1202 20:36:14.315172 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6n9j\" (UniqueName: \"kubernetes.io/projected/1b17e49e-2ba6-420e-913c-841bcf939fae-kube-api-access-q6n9j\") pod \"watcher-kuttl-applier-0\" (UID: \"1b17e49e-2ba6-420e-913c-841bcf939fae\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.315218 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b17e49e-2ba6-420e-913c-841bcf939fae-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"1b17e49e-2ba6-420e-913c-841bcf939fae\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.315238 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c128dd3-15ff-4e59-9154-503963c6f219-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"6c128dd3-15ff-4e59-9154-503963c6f219\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.315273 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/6c128dd3-15ff-4e59-9154-503963c6f219-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"6c128dd3-15ff-4e59-9154-503963c6f219\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.315301 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1b17e49e-2ba6-420e-913c-841bcf939fae-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"1b17e49e-2ba6-420e-913c-841bcf939fae\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.315731 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c128dd3-15ff-4e59-9154-503963c6f219-logs\") pod \"watcher-kuttl-api-0\" (UID: \"6c128dd3-15ff-4e59-9154-503963c6f219\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.319868 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/6c128dd3-15ff-4e59-9154-503963c6f219-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"6c128dd3-15ff-4e59-9154-503963c6f219\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.320038 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c128dd3-15ff-4e59-9154-503963c6f219-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"6c128dd3-15ff-4e59-9154-503963c6f219\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.319973 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c128dd3-15ff-4e59-9154-503963c6f219-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"6c128dd3-15ff-4e59-9154-503963c6f219\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 
20:36:14.321709 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c128dd3-15ff-4e59-9154-503963c6f219-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"6c128dd3-15ff-4e59-9154-503963c6f219\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.325337 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c128dd3-15ff-4e59-9154-503963c6f219-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"6c128dd3-15ff-4e59-9154-503963c6f219\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.327760 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6c128dd3-15ff-4e59-9154-503963c6f219-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"6c128dd3-15ff-4e59-9154-503963c6f219\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.347930 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8srt\" (UniqueName: \"kubernetes.io/projected/6c128dd3-15ff-4e59-9154-503963c6f219-kube-api-access-v8srt\") pod \"watcher-kuttl-api-0\" (UID: \"6c128dd3-15ff-4e59-9154-503963c6f219\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.409690 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/memcached-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.422579 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b17e49e-2ba6-420e-913c-841bcf939fae-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"1b17e49e-2ba6-420e-913c-841bcf939fae\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.422636 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6n9j\" (UniqueName: \"kubernetes.io/projected/1b17e49e-2ba6-420e-913c-841bcf939fae-kube-api-access-q6n9j\") pod \"watcher-kuttl-applier-0\" (UID: \"1b17e49e-2ba6-420e-913c-841bcf939fae\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.422691 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b17e49e-2ba6-420e-913c-841bcf939fae-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"1b17e49e-2ba6-420e-913c-841bcf939fae\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.422735 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1b17e49e-2ba6-420e-913c-841bcf939fae-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"1b17e49e-2ba6-420e-913c-841bcf939fae\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.422838 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b17e49e-2ba6-420e-913c-841bcf939fae-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"1b17e49e-2ba6-420e-913c-841bcf939fae\") " 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.423027 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b17e49e-2ba6-420e-913c-841bcf939fae-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"1b17e49e-2ba6-420e-913c-841bcf939fae\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.428739 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b17e49e-2ba6-420e-913c-841bcf939fae-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"1b17e49e-2ba6-420e-913c-841bcf939fae\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.432165 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b17e49e-2ba6-420e-913c-841bcf939fae-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"1b17e49e-2ba6-420e-913c-841bcf939fae\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.432869 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1b17e49e-2ba6-420e-913c-841bcf939fae-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"1b17e49e-2ba6-420e-913c-841bcf939fae\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.445443 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6n9j\" (UniqueName: \"kubernetes.io/projected/1b17e49e-2ba6-420e-913c-841bcf939fae-kube-api-access-q6n9j\") pod \"watcher-kuttl-applier-0\" (UID: \"1b17e49e-2ba6-420e-913c-841bcf939fae\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.463691 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:14 crc kubenswrapper[4796]: I1202 20:36:14.713655 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:36:15 crc kubenswrapper[4796]: I1202 20:36:15.021534 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/memcached-0"] Dec 02 20:36:15 crc kubenswrapper[4796]: I1202 20:36:15.225935 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:36:15 crc kubenswrapper[4796]: W1202 20:36:15.226549 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c128dd3_15ff_4e59_9154_503963c6f219.slice/crio-1faa9ad740d9d013262b7437e92427016daa11b76a799dd9166bf26d1db4e408 WatchSource:0}: Error finding container 1faa9ad740d9d013262b7437e92427016daa11b76a799dd9166bf26d1db4e408: Status 404 returned error can't find the container with id 1faa9ad740d9d013262b7437e92427016daa11b76a799dd9166bf26d1db4e408 Dec 02 20:36:15 crc kubenswrapper[4796]: I1202 20:36:15.277898 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3935e453-d78c-4ddb-8de1-51b4699e33be" path="/var/lib/kubelet/pods/3935e453-d78c-4ddb-8de1-51b4699e33be/volumes" Dec 02 20:36:15 crc kubenswrapper[4796]: I1202 20:36:15.279565 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59820ff0-e57d-4496-8566-09abbba3dd5a" path="/var/lib/kubelet/pods/59820ff0-e57d-4496-8566-09abbba3dd5a/volumes" Dec 02 20:36:15 crc kubenswrapper[4796]: I1202 20:36:15.280596 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebca558f-ecfa-4b05-b9df-b59f884f0366" path="/var/lib/kubelet/pods/ebca558f-ecfa-4b05-b9df-b59f884f0366/volumes" Dec 02 20:36:15 crc kubenswrapper[4796]: I1202 20:36:15.289809 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:36:15 crc kubenswrapper[4796]: I1202 20:36:15.931012 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"6c128dd3-15ff-4e59-9154-503963c6f219","Type":"ContainerStarted","Data":"64a08f1c513929146cc41957a410cda233c3f0715228e910043c2700c7ff82ad"} Dec 02 20:36:15 crc kubenswrapper[4796]: I1202 20:36:15.931563 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:15 crc kubenswrapper[4796]: I1202 20:36:15.931589 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"6c128dd3-15ff-4e59-9154-503963c6f219","Type":"ContainerStarted","Data":"3dbae66bf0c35f14878e8464296a0d22c3101711c74b6c44115f792d9f41cbfd"} Dec 02 20:36:15 crc kubenswrapper[4796]: I1202 20:36:15.931610 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"6c128dd3-15ff-4e59-9154-503963c6f219","Type":"ContainerStarted","Data":"1faa9ad740d9d013262b7437e92427016daa11b76a799dd9166bf26d1db4e408"} Dec 02 20:36:15 crc kubenswrapper[4796]: I1202 20:36:15.932837 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"2919071b-425d-403e-9bff-fccdd9142500","Type":"ContainerStarted","Data":"da4d4c94bd28ace6c5d19a47d5081cff48a88483cb692f2076f4ed2df5388b39"} Dec 02 20:36:15 crc kubenswrapper[4796]: I1202 20:36:15.932886 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" 
event={"ID":"2919071b-425d-403e-9bff-fccdd9142500","Type":"ContainerStarted","Data":"301ee1a1afde6eca022dbced99db743980092c7b72349cfb4ab318c16b1088cb"} Dec 02 20:36:15 crc kubenswrapper[4796]: I1202 20:36:15.932965 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/memcached-0" Dec 02 20:36:15 crc kubenswrapper[4796]: I1202 20:36:15.933498 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="6c128dd3-15ff-4e59-9154-503963c6f219" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.165:9322/\": dial tcp 10.217.0.165:9322: connect: connection refused" Dec 02 20:36:15 crc kubenswrapper[4796]: I1202 20:36:15.935195 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"1b17e49e-2ba6-420e-913c-841bcf939fae","Type":"ContainerStarted","Data":"25f9d883bfe27f5baa25f86181b23b7a35da24f5f9c82fe960faf1a1eefb32ea"} Dec 02 20:36:15 crc kubenswrapper[4796]: I1202 20:36:15.935226 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"1b17e49e-2ba6-420e-913c-841bcf939fae","Type":"ContainerStarted","Data":"615333164dc197ca4bca4513324d214fecc77c606e2631e86be4adfd17ecb686"} Dec 02 20:36:15 crc kubenswrapper[4796]: I1202 20:36:15.967573 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.967542379 podStartE2EDuration="2.967542379s" podCreationTimestamp="2025-12-02 20:36:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:36:15.959579656 +0000 UTC m=+1458.962955190" watchObservedRunningTime="2025-12-02 20:36:15.967542379 +0000 UTC m=+1458.970917913" Dec 02 20:36:16 crc kubenswrapper[4796]: I1202 20:36:16.000803 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/memcached-0" podStartSLOduration=3.000779635 podStartE2EDuration="3.000779635s" podCreationTimestamp="2025-12-02 20:36:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:36:15.99394446 +0000 UTC m=+1458.997319994" watchObservedRunningTime="2025-12-02 20:36:16.000779635 +0000 UTC m=+1459.004155169" Dec 02 20:36:16 crc kubenswrapper[4796]: I1202 20:36:16.028629 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.02860962 podStartE2EDuration="2.02860962s" podCreationTimestamp="2025-12-02 20:36:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:36:16.016777624 +0000 UTC m=+1459.020153168" watchObservedRunningTime="2025-12-02 20:36:16.02860962 +0000 UTC m=+1459.031985154" Dec 02 20:36:16 crc kubenswrapper[4796]: E1202 20:36:16.188070 4796 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="25229629b08b8f03c4439b35d1fd3c02e7e70ac188a57cd16fdabc846065e78f" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Dec 02 20:36:16 crc kubenswrapper[4796]: E1202 20:36:16.189604 4796 log.go:32] "ExecSync cmd from 
runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="25229629b08b8f03c4439b35d1fd3c02e7e70ac188a57cd16fdabc846065e78f" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Dec 02 20:36:16 crc kubenswrapper[4796]: E1202 20:36:16.190815 4796 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="25229629b08b8f03c4439b35d1fd3c02e7e70ac188a57cd16fdabc846065e78f" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Dec 02 20:36:16 crc kubenswrapper[4796]: E1202 20:36:16.190854 4796 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="cc870eb3-bc6e-44eb-ab6d-3c786278a702" containerName="watcher-decision-engine" Dec 02 20:36:16 crc kubenswrapper[4796]: I1202 20:36:16.946087 4796 generic.go:334] "Generic (PLEG): container finished" podID="1ac1760c-0cd2-480a-8315-4054fc65f81a" containerID="e1ecd5375a0d4335d7df648084177b4caf6c9c39a105b64285947284d2114186" exitCode=0 Dec 02 20:36:16 crc kubenswrapper[4796]: I1202 20:36:16.946145 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-v825p" event={"ID":"1ac1760c-0cd2-480a-8315-4054fc65f81a","Type":"ContainerDied","Data":"e1ecd5375a0d4335d7df648084177b4caf6c9c39a105b64285947284d2114186"} Dec 02 20:36:17 crc kubenswrapper[4796]: I1202 20:36:17.884722 4796 scope.go:117] "RemoveContainer" containerID="d590de5afef42d93b9a38bbfd4aa330e60d1f4f7d70c536db133d6f9063367ac" Dec 02 20:36:18 crc kubenswrapper[4796]: I1202 20:36:18.448631 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-v825p" Dec 02 20:36:18 crc kubenswrapper[4796]: I1202 20:36:18.507067 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1ac1760c-0cd2-480a-8315-4054fc65f81a-fernet-keys\") pod \"1ac1760c-0cd2-480a-8315-4054fc65f81a\" (UID: \"1ac1760c-0cd2-480a-8315-4054fc65f81a\") " Dec 02 20:36:18 crc kubenswrapper[4796]: I1202 20:36:18.507197 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1ac1760c-0cd2-480a-8315-4054fc65f81a-credential-keys\") pod \"1ac1760c-0cd2-480a-8315-4054fc65f81a\" (UID: \"1ac1760c-0cd2-480a-8315-4054fc65f81a\") " Dec 02 20:36:18 crc kubenswrapper[4796]: I1202 20:36:18.507235 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ac1760c-0cd2-480a-8315-4054fc65f81a-config-data\") pod \"1ac1760c-0cd2-480a-8315-4054fc65f81a\" (UID: \"1ac1760c-0cd2-480a-8315-4054fc65f81a\") " Dec 02 20:36:18 crc kubenswrapper[4796]: I1202 20:36:18.507314 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1ac1760c-0cd2-480a-8315-4054fc65f81a-cert-memcached-mtls\") pod \"1ac1760c-0cd2-480a-8315-4054fc65f81a\" (UID: \"1ac1760c-0cd2-480a-8315-4054fc65f81a\") " Dec 02 20:36:18 crc kubenswrapper[4796]: I1202 20:36:18.507353 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ac1760c-0cd2-480a-8315-4054fc65f81a-scripts\") pod \"1ac1760c-0cd2-480a-8315-4054fc65f81a\" (UID: \"1ac1760c-0cd2-480a-8315-4054fc65f81a\") " Dec 02 20:36:18 crc kubenswrapper[4796]: I1202 20:36:18.507401 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8mm9\" (UniqueName: \"kubernetes.io/projected/1ac1760c-0cd2-480a-8315-4054fc65f81a-kube-api-access-g8mm9\") pod \"1ac1760c-0cd2-480a-8315-4054fc65f81a\" (UID: \"1ac1760c-0cd2-480a-8315-4054fc65f81a\") " Dec 02 20:36:18 crc kubenswrapper[4796]: I1202 20:36:18.507467 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ac1760c-0cd2-480a-8315-4054fc65f81a-combined-ca-bundle\") pod \"1ac1760c-0cd2-480a-8315-4054fc65f81a\" (UID: \"1ac1760c-0cd2-480a-8315-4054fc65f81a\") " Dec 02 20:36:18 crc kubenswrapper[4796]: I1202 20:36:18.538582 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ac1760c-0cd2-480a-8315-4054fc65f81a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1ac1760c-0cd2-480a-8315-4054fc65f81a" (UID: "1ac1760c-0cd2-480a-8315-4054fc65f81a"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:36:18 crc kubenswrapper[4796]: I1202 20:36:18.542861 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ac1760c-0cd2-480a-8315-4054fc65f81a-kube-api-access-g8mm9" (OuterVolumeSpecName: "kube-api-access-g8mm9") pod "1ac1760c-0cd2-480a-8315-4054fc65f81a" (UID: "1ac1760c-0cd2-480a-8315-4054fc65f81a"). InnerVolumeSpecName "kube-api-access-g8mm9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:36:18 crc kubenswrapper[4796]: I1202 20:36:18.548478 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ac1760c-0cd2-480a-8315-4054fc65f81a-scripts" (OuterVolumeSpecName: "scripts") pod "1ac1760c-0cd2-480a-8315-4054fc65f81a" (UID: "1ac1760c-0cd2-480a-8315-4054fc65f81a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:36:18 crc kubenswrapper[4796]: I1202 20:36:18.555636 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ac1760c-0cd2-480a-8315-4054fc65f81a-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "1ac1760c-0cd2-480a-8315-4054fc65f81a" (UID: "1ac1760c-0cd2-480a-8315-4054fc65f81a"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:36:18 crc kubenswrapper[4796]: I1202 20:36:18.583069 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ac1760c-0cd2-480a-8315-4054fc65f81a-config-data" (OuterVolumeSpecName: "config-data") pod "1ac1760c-0cd2-480a-8315-4054fc65f81a" (UID: "1ac1760c-0cd2-480a-8315-4054fc65f81a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:36:18 crc kubenswrapper[4796]: I1202 20:36:18.587410 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ac1760c-0cd2-480a-8315-4054fc65f81a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1ac1760c-0cd2-480a-8315-4054fc65f81a" (UID: "1ac1760c-0cd2-480a-8315-4054fc65f81a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:36:18 crc kubenswrapper[4796]: I1202 20:36:18.608909 4796 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1ac1760c-0cd2-480a-8315-4054fc65f81a-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:18 crc kubenswrapper[4796]: I1202 20:36:18.608939 4796 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1ac1760c-0cd2-480a-8315-4054fc65f81a-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:18 crc kubenswrapper[4796]: I1202 20:36:18.608948 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ac1760c-0cd2-480a-8315-4054fc65f81a-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:18 crc kubenswrapper[4796]: I1202 20:36:18.608957 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ac1760c-0cd2-480a-8315-4054fc65f81a-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:18 crc kubenswrapper[4796]: I1202 20:36:18.608965 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8mm9\" (UniqueName: \"kubernetes.io/projected/1ac1760c-0cd2-480a-8315-4054fc65f81a-kube-api-access-g8mm9\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:18 crc kubenswrapper[4796]: I1202 20:36:18.608973 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ac1760c-0cd2-480a-8315-4054fc65f81a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:18 crc kubenswrapper[4796]: I1202 20:36:18.624116 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/1ac1760c-0cd2-480a-8315-4054fc65f81a-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "1ac1760c-0cd2-480a-8315-4054fc65f81a" (UID: "1ac1760c-0cd2-480a-8315-4054fc65f81a"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:36:18 crc kubenswrapper[4796]: I1202 20:36:18.710997 4796 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1ac1760c-0cd2-480a-8315-4054fc65f81a-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:19 crc kubenswrapper[4796]: I1202 20:36:19.025638 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-v825p" event={"ID":"1ac1760c-0cd2-480a-8315-4054fc65f81a","Type":"ContainerDied","Data":"b5c014ad73848b60a2bd1169a368e52c830824d6da69d60f5d4c326199969a94"} Dec 02 20:36:19 crc kubenswrapper[4796]: I1202 20:36:19.025957 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5c014ad73848b60a2bd1169a368e52c830824d6da69d60f5d4c326199969a94" Dec 02 20:36:19 crc kubenswrapper[4796]: I1202 20:36:19.026039 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-v825p" Dec 02 20:36:19 crc kubenswrapper[4796]: I1202 20:36:19.329804 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:19 crc kubenswrapper[4796]: I1202 20:36:19.366997 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:36:19 crc kubenswrapper[4796]: I1202 20:36:19.464972 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:19 crc kubenswrapper[4796]: I1202 20:36:19.532043 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc870eb3-bc6e-44eb-ab6d-3c786278a702-config-data\") pod \"cc870eb3-bc6e-44eb-ab6d-3c786278a702\" (UID: \"cc870eb3-bc6e-44eb-ab6d-3c786278a702\") " Dec 02 20:36:19 crc kubenswrapper[4796]: I1202 20:36:19.532124 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc870eb3-bc6e-44eb-ab6d-3c786278a702-combined-ca-bundle\") pod \"cc870eb3-bc6e-44eb-ab6d-3c786278a702\" (UID: \"cc870eb3-bc6e-44eb-ab6d-3c786278a702\") " Dec 02 20:36:19 crc kubenswrapper[4796]: I1202 20:36:19.532176 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cc870eb3-bc6e-44eb-ab6d-3c786278a702-custom-prometheus-ca\") pod \"cc870eb3-bc6e-44eb-ab6d-3c786278a702\" (UID: \"cc870eb3-bc6e-44eb-ab6d-3c786278a702\") " Dec 02 20:36:19 crc kubenswrapper[4796]: I1202 20:36:19.532240 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb2m4\" (UniqueName: \"kubernetes.io/projected/cc870eb3-bc6e-44eb-ab6d-3c786278a702-kube-api-access-wb2m4\") pod \"cc870eb3-bc6e-44eb-ab6d-3c786278a702\" (UID: \"cc870eb3-bc6e-44eb-ab6d-3c786278a702\") " Dec 02 20:36:19 crc kubenswrapper[4796]: I1202 20:36:19.532323 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/cc870eb3-bc6e-44eb-ab6d-3c786278a702-logs\") pod \"cc870eb3-bc6e-44eb-ab6d-3c786278a702\" (UID: \"cc870eb3-bc6e-44eb-ab6d-3c786278a702\") " Dec 02 20:36:19 crc kubenswrapper[4796]: I1202 20:36:19.532964 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc870eb3-bc6e-44eb-ab6d-3c786278a702-logs" (OuterVolumeSpecName: "logs") pod "cc870eb3-bc6e-44eb-ab6d-3c786278a702" (UID: "cc870eb3-bc6e-44eb-ab6d-3c786278a702"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:36:19 crc kubenswrapper[4796]: I1202 20:36:19.552530 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc870eb3-bc6e-44eb-ab6d-3c786278a702-kube-api-access-wb2m4" (OuterVolumeSpecName: "kube-api-access-wb2m4") pod "cc870eb3-bc6e-44eb-ab6d-3c786278a702" (UID: "cc870eb3-bc6e-44eb-ab6d-3c786278a702"). InnerVolumeSpecName "kube-api-access-wb2m4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:36:19 crc kubenswrapper[4796]: I1202 20:36:19.562365 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc870eb3-bc6e-44eb-ab6d-3c786278a702-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "cc870eb3-bc6e-44eb-ab6d-3c786278a702" (UID: "cc870eb3-bc6e-44eb-ab6d-3c786278a702"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:36:19 crc kubenswrapper[4796]: I1202 20:36:19.563890 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc870eb3-bc6e-44eb-ab6d-3c786278a702-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc870eb3-bc6e-44eb-ab6d-3c786278a702" (UID: "cc870eb3-bc6e-44eb-ab6d-3c786278a702"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:36:19 crc kubenswrapper[4796]: I1202 20:36:19.587465 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc870eb3-bc6e-44eb-ab6d-3c786278a702-config-data" (OuterVolumeSpecName: "config-data") pod "cc870eb3-bc6e-44eb-ab6d-3c786278a702" (UID: "cc870eb3-bc6e-44eb-ab6d-3c786278a702"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:36:19 crc kubenswrapper[4796]: I1202 20:36:19.634339 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc870eb3-bc6e-44eb-ab6d-3c786278a702-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:19 crc kubenswrapper[4796]: I1202 20:36:19.634383 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc870eb3-bc6e-44eb-ab6d-3c786278a702-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:19 crc kubenswrapper[4796]: I1202 20:36:19.634399 4796 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cc870eb3-bc6e-44eb-ab6d-3c786278a702-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:19 crc kubenswrapper[4796]: I1202 20:36:19.634412 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wb2m4\" (UniqueName: \"kubernetes.io/projected/cc870eb3-bc6e-44eb-ab6d-3c786278a702-kube-api-access-wb2m4\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:19 crc kubenswrapper[4796]: I1202 20:36:19.634424 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc870eb3-bc6e-44eb-ab6d-3c786278a702-logs\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:19 crc kubenswrapper[4796]: I1202 20:36:19.714908 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:36:20 crc kubenswrapper[4796]: I1202 20:36:20.036554 4796 generic.go:334] "Generic (PLEG): container finished" podID="cc870eb3-bc6e-44eb-ab6d-3c786278a702" containerID="25229629b08b8f03c4439b35d1fd3c02e7e70ac188a57cd16fdabc846065e78f" exitCode=0 Dec 02 20:36:20 crc kubenswrapper[4796]: I1202 20:36:20.036616 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"cc870eb3-bc6e-44eb-ab6d-3c786278a702","Type":"ContainerDied","Data":"25229629b08b8f03c4439b35d1fd3c02e7e70ac188a57cd16fdabc846065e78f"} Dec 02 20:36:20 crc kubenswrapper[4796]: I1202 20:36:20.036931 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"cc870eb3-bc6e-44eb-ab6d-3c786278a702","Type":"ContainerDied","Data":"d4c48d6a0940509d928db5678a8a3a31eac36ce8024d733a9c2162b44828605d"} Dec 02 20:36:20 crc kubenswrapper[4796]: I1202 20:36:20.036977 4796 scope.go:117] "RemoveContainer" containerID="25229629b08b8f03c4439b35d1fd3c02e7e70ac188a57cd16fdabc846065e78f" Dec 02 20:36:20 crc kubenswrapper[4796]: I1202 20:36:20.036640 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:36:20 crc kubenswrapper[4796]: I1202 20:36:20.070772 4796 scope.go:117] "RemoveContainer" containerID="25229629b08b8f03c4439b35d1fd3c02e7e70ac188a57cd16fdabc846065e78f" Dec 02 20:36:20 crc kubenswrapper[4796]: E1202 20:36:20.071390 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25229629b08b8f03c4439b35d1fd3c02e7e70ac188a57cd16fdabc846065e78f\": container with ID starting with 25229629b08b8f03c4439b35d1fd3c02e7e70ac188a57cd16fdabc846065e78f not found: ID does not exist" containerID="25229629b08b8f03c4439b35d1fd3c02e7e70ac188a57cd16fdabc846065e78f" Dec 02 20:36:20 crc kubenswrapper[4796]: I1202 20:36:20.071447 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25229629b08b8f03c4439b35d1fd3c02e7e70ac188a57cd16fdabc846065e78f"} err="failed to get container status \"25229629b08b8f03c4439b35d1fd3c02e7e70ac188a57cd16fdabc846065e78f\": rpc error: code = NotFound desc = could not find container \"25229629b08b8f03c4439b35d1fd3c02e7e70ac188a57cd16fdabc846065e78f\": container with ID starting with 25229629b08b8f03c4439b35d1fd3c02e7e70ac188a57cd16fdabc846065e78f not found: ID does not exist" Dec 02 20:36:20 crc kubenswrapper[4796]: I1202 20:36:20.097540 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:36:20 crc kubenswrapper[4796]: I1202 20:36:20.113334 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:36:20 crc kubenswrapper[4796]: I1202 20:36:20.125916 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:36:20 crc kubenswrapper[4796]: E1202 20:36:20.126534 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ac1760c-0cd2-480a-8315-4054fc65f81a" containerName="keystone-bootstrap" Dec 02 20:36:20 crc kubenswrapper[4796]: I1202 20:36:20.126568 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ac1760c-0cd2-480a-8315-4054fc65f81a" containerName="keystone-bootstrap" Dec 02 20:36:20 crc kubenswrapper[4796]: E1202 20:36:20.126625 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc870eb3-bc6e-44eb-ab6d-3c786278a702" containerName="watcher-decision-engine" Dec 02 20:36:20 crc kubenswrapper[4796]: I1202 20:36:20.126640 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc870eb3-bc6e-44eb-ab6d-3c786278a702" containerName="watcher-decision-engine" Dec 02 20:36:20 crc kubenswrapper[4796]: I1202 20:36:20.126988 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc870eb3-bc6e-44eb-ab6d-3c786278a702" containerName="watcher-decision-engine" Dec 02 20:36:20 crc kubenswrapper[4796]: I1202 20:36:20.127035 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ac1760c-0cd2-480a-8315-4054fc65f81a" containerName="keystone-bootstrap" Dec 02 20:36:20 crc kubenswrapper[4796]: I1202 20:36:20.128145 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:36:20 crc kubenswrapper[4796]: I1202 20:36:20.131966 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Dec 02 20:36:20 crc kubenswrapper[4796]: I1202 20:36:20.139380 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:36:20 crc kubenswrapper[4796]: I1202 20:36:20.244870 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcxn6\" (UniqueName: \"kubernetes.io/projected/8d488a05-dbc8-49d8-921b-7fe2b03a8eef-kube-api-access-kcxn6\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8d488a05-dbc8-49d8-921b-7fe2b03a8eef\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:36:20 crc kubenswrapper[4796]: I1202 20:36:20.245291 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d488a05-dbc8-49d8-921b-7fe2b03a8eef-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8d488a05-dbc8-49d8-921b-7fe2b03a8eef\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:36:20 crc kubenswrapper[4796]: I1202 20:36:20.245337 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d488a05-dbc8-49d8-921b-7fe2b03a8eef-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8d488a05-dbc8-49d8-921b-7fe2b03a8eef\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:36:20 crc kubenswrapper[4796]: I1202 20:36:20.245398 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d488a05-dbc8-49d8-921b-7fe2b03a8eef-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8d488a05-dbc8-49d8-921b-7fe2b03a8eef\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:36:20 crc kubenswrapper[4796]: I1202 20:36:20.245463 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8d488a05-dbc8-49d8-921b-7fe2b03a8eef-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8d488a05-dbc8-49d8-921b-7fe2b03a8eef\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:36:20 crc kubenswrapper[4796]: I1202 20:36:20.245488 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8d488a05-dbc8-49d8-921b-7fe2b03a8eef-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8d488a05-dbc8-49d8-921b-7fe2b03a8eef\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:36:20 crc kubenswrapper[4796]: I1202 20:36:20.347353 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8d488a05-dbc8-49d8-921b-7fe2b03a8eef-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8d488a05-dbc8-49d8-921b-7fe2b03a8eef\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:36:20 crc kubenswrapper[4796]: I1202 20:36:20.347425 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8d488a05-dbc8-49d8-921b-7fe2b03a8eef-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8d488a05-dbc8-49d8-921b-7fe2b03a8eef\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:36:20 crc kubenswrapper[4796]: I1202 20:36:20.347632 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcxn6\" (UniqueName: \"kubernetes.io/projected/8d488a05-dbc8-49d8-921b-7fe2b03a8eef-kube-api-access-kcxn6\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8d488a05-dbc8-49d8-921b-7fe2b03a8eef\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:36:20 crc kubenswrapper[4796]: I1202 20:36:20.347680 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d488a05-dbc8-49d8-921b-7fe2b03a8eef-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8d488a05-dbc8-49d8-921b-7fe2b03a8eef\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:36:20 crc kubenswrapper[4796]: I1202 20:36:20.347714 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d488a05-dbc8-49d8-921b-7fe2b03a8eef-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8d488a05-dbc8-49d8-921b-7fe2b03a8eef\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:36:20 crc kubenswrapper[4796]: I1202 20:36:20.347765 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d488a05-dbc8-49d8-921b-7fe2b03a8eef-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8d488a05-dbc8-49d8-921b-7fe2b03a8eef\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:36:20 crc kubenswrapper[4796]: I1202 20:36:20.348398 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d488a05-dbc8-49d8-921b-7fe2b03a8eef-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8d488a05-dbc8-49d8-921b-7fe2b03a8eef\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:36:20 crc kubenswrapper[4796]: I1202 20:36:20.353865 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8d488a05-dbc8-49d8-921b-7fe2b03a8eef-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8d488a05-dbc8-49d8-921b-7fe2b03a8eef\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:36:20 crc kubenswrapper[4796]: I1202 20:36:20.354352 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d488a05-dbc8-49d8-921b-7fe2b03a8eef-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8d488a05-dbc8-49d8-921b-7fe2b03a8eef\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:36:20 crc kubenswrapper[4796]: I1202 20:36:20.355173 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8d488a05-dbc8-49d8-921b-7fe2b03a8eef-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8d488a05-dbc8-49d8-921b-7fe2b03a8eef\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:36:20 crc 
kubenswrapper[4796]: I1202 20:36:20.356204 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d488a05-dbc8-49d8-921b-7fe2b03a8eef-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8d488a05-dbc8-49d8-921b-7fe2b03a8eef\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:36:20 crc kubenswrapper[4796]: I1202 20:36:20.377496 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcxn6\" (UniqueName: \"kubernetes.io/projected/8d488a05-dbc8-49d8-921b-7fe2b03a8eef-kube-api-access-kcxn6\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8d488a05-dbc8-49d8-921b-7fe2b03a8eef\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:36:20 crc kubenswrapper[4796]: I1202 20:36:20.480936 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:36:20 crc kubenswrapper[4796]: I1202 20:36:20.965171 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:36:21 crc kubenswrapper[4796]: I1202 20:36:21.090023 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"8d488a05-dbc8-49d8-921b-7fe2b03a8eef","Type":"ContainerStarted","Data":"c9601c296e41c765cb3c5f2648e57c6a8c8c07de93b88d775b1df9b50e98cd35"} Dec 02 20:36:21 crc kubenswrapper[4796]: I1202 20:36:21.278503 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc870eb3-bc6e-44eb-ab6d-3c786278a702" path="/var/lib/kubelet/pods/cc870eb3-bc6e-44eb-ab6d-3c786278a702/volumes" Dec 02 20:36:22 crc kubenswrapper[4796]: I1202 20:36:22.108473 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"8d488a05-dbc8-49d8-921b-7fe2b03a8eef","Type":"ContainerStarted","Data":"47e8efee9dd04fbd6799794418af3768051d531e7be8185a0af5546b8dbdd82a"} Dec 02 20:36:22 crc kubenswrapper[4796]: I1202 20:36:22.127403 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.12738133 podStartE2EDuration="2.12738133s" podCreationTimestamp="2025-12-02 20:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:36:22.123990178 +0000 UTC m=+1465.127365712" watchObservedRunningTime="2025-12-02 20:36:22.12738133 +0000 UTC m=+1465.130756864" Dec 02 20:36:24 crc kubenswrapper[4796]: I1202 20:36:24.411166 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/memcached-0" Dec 02 20:36:24 crc kubenswrapper[4796]: I1202 20:36:24.464527 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:24 crc kubenswrapper[4796]: I1202 20:36:24.476395 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:24 crc kubenswrapper[4796]: I1202 20:36:24.546771 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-5d85f8c497-qddnq"] Dec 02 20:36:24 crc kubenswrapper[4796]: I1202 20:36:24.547835 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-5d85f8c497-qddnq" Dec 02 20:36:24 crc kubenswrapper[4796]: I1202 20:36:24.564923 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-5d85f8c497-qddnq"] Dec 02 20:36:24 crc kubenswrapper[4796]: I1202 20:36:24.644484 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa9cece5-f7ba-4e35-918f-33714da53c64-internal-tls-certs\") pod \"keystone-5d85f8c497-qddnq\" (UID: \"fa9cece5-f7ba-4e35-918f-33714da53c64\") " pod="watcher-kuttl-default/keystone-5d85f8c497-qddnq" Dec 02 20:36:24 crc kubenswrapper[4796]: I1202 20:36:24.644528 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fa9cece5-f7ba-4e35-918f-33714da53c64-fernet-keys\") pod \"keystone-5d85f8c497-qddnq\" (UID: \"fa9cece5-f7ba-4e35-918f-33714da53c64\") " pod="watcher-kuttl-default/keystone-5d85f8c497-qddnq" Dec 02 20:36:24 crc kubenswrapper[4796]: I1202 20:36:24.644566 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa9cece5-f7ba-4e35-918f-33714da53c64-config-data\") pod \"keystone-5d85f8c497-qddnq\" (UID: \"fa9cece5-f7ba-4e35-918f-33714da53c64\") " pod="watcher-kuttl-default/keystone-5d85f8c497-qddnq" Dec 02 20:36:24 crc kubenswrapper[4796]: I1202 20:36:24.644586 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/fa9cece5-f7ba-4e35-918f-33714da53c64-cert-memcached-mtls\") pod \"keystone-5d85f8c497-qddnq\" (UID: \"fa9cece5-f7ba-4e35-918f-33714da53c64\") " pod="watcher-kuttl-default/keystone-5d85f8c497-qddnq" Dec 02 20:36:24 crc kubenswrapper[4796]: I1202 20:36:24.644719 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa9cece5-f7ba-4e35-918f-33714da53c64-public-tls-certs\") pod \"keystone-5d85f8c497-qddnq\" (UID: \"fa9cece5-f7ba-4e35-918f-33714da53c64\") " pod="watcher-kuttl-default/keystone-5d85f8c497-qddnq" Dec 02 20:36:24 crc kubenswrapper[4796]: I1202 20:36:24.644809 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvtc8\" (UniqueName: \"kubernetes.io/projected/fa9cece5-f7ba-4e35-918f-33714da53c64-kube-api-access-wvtc8\") pod \"keystone-5d85f8c497-qddnq\" (UID: \"fa9cece5-f7ba-4e35-918f-33714da53c64\") " pod="watcher-kuttl-default/keystone-5d85f8c497-qddnq" Dec 02 20:36:24 crc kubenswrapper[4796]: I1202 20:36:24.644892 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fa9cece5-f7ba-4e35-918f-33714da53c64-credential-keys\") pod \"keystone-5d85f8c497-qddnq\" (UID: \"fa9cece5-f7ba-4e35-918f-33714da53c64\") " pod="watcher-kuttl-default/keystone-5d85f8c497-qddnq" Dec 02 20:36:24 crc kubenswrapper[4796]: I1202 20:36:24.644916 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa9cece5-f7ba-4e35-918f-33714da53c64-combined-ca-bundle\") pod \"keystone-5d85f8c497-qddnq\" (UID: \"fa9cece5-f7ba-4e35-918f-33714da53c64\") " 
pod="watcher-kuttl-default/keystone-5d85f8c497-qddnq" Dec 02 20:36:24 crc kubenswrapper[4796]: I1202 20:36:24.644979 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa9cece5-f7ba-4e35-918f-33714da53c64-scripts\") pod \"keystone-5d85f8c497-qddnq\" (UID: \"fa9cece5-f7ba-4e35-918f-33714da53c64\") " pod="watcher-kuttl-default/keystone-5d85f8c497-qddnq" Dec 02 20:36:24 crc kubenswrapper[4796]: I1202 20:36:24.715399 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:36:24 crc kubenswrapper[4796]: I1202 20:36:24.746324 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa9cece5-f7ba-4e35-918f-33714da53c64-scripts\") pod \"keystone-5d85f8c497-qddnq\" (UID: \"fa9cece5-f7ba-4e35-918f-33714da53c64\") " pod="watcher-kuttl-default/keystone-5d85f8c497-qddnq" Dec 02 20:36:24 crc kubenswrapper[4796]: I1202 20:36:24.746414 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa9cece5-f7ba-4e35-918f-33714da53c64-internal-tls-certs\") pod \"keystone-5d85f8c497-qddnq\" (UID: \"fa9cece5-f7ba-4e35-918f-33714da53c64\") " pod="watcher-kuttl-default/keystone-5d85f8c497-qddnq" Dec 02 20:36:24 crc kubenswrapper[4796]: I1202 20:36:24.746438 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fa9cece5-f7ba-4e35-918f-33714da53c64-fernet-keys\") pod \"keystone-5d85f8c497-qddnq\" (UID: \"fa9cece5-f7ba-4e35-918f-33714da53c64\") " pod="watcher-kuttl-default/keystone-5d85f8c497-qddnq" Dec 02 20:36:24 crc kubenswrapper[4796]: I1202 20:36:24.746473 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa9cece5-f7ba-4e35-918f-33714da53c64-config-data\") pod \"keystone-5d85f8c497-qddnq\" (UID: \"fa9cece5-f7ba-4e35-918f-33714da53c64\") " pod="watcher-kuttl-default/keystone-5d85f8c497-qddnq" Dec 02 20:36:24 crc kubenswrapper[4796]: I1202 20:36:24.746497 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/fa9cece5-f7ba-4e35-918f-33714da53c64-cert-memcached-mtls\") pod \"keystone-5d85f8c497-qddnq\" (UID: \"fa9cece5-f7ba-4e35-918f-33714da53c64\") " pod="watcher-kuttl-default/keystone-5d85f8c497-qddnq" Dec 02 20:36:24 crc kubenswrapper[4796]: I1202 20:36:24.746537 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa9cece5-f7ba-4e35-918f-33714da53c64-public-tls-certs\") pod \"keystone-5d85f8c497-qddnq\" (UID: \"fa9cece5-f7ba-4e35-918f-33714da53c64\") " pod="watcher-kuttl-default/keystone-5d85f8c497-qddnq" Dec 02 20:36:24 crc kubenswrapper[4796]: I1202 20:36:24.746589 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvtc8\" (UniqueName: \"kubernetes.io/projected/fa9cece5-f7ba-4e35-918f-33714da53c64-kube-api-access-wvtc8\") pod \"keystone-5d85f8c497-qddnq\" (UID: \"fa9cece5-f7ba-4e35-918f-33714da53c64\") " pod="watcher-kuttl-default/keystone-5d85f8c497-qddnq" Dec 02 20:36:24 crc kubenswrapper[4796]: I1202 20:36:24.746630 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"credential-keys\" (UniqueName: \"kubernetes.io/secret/fa9cece5-f7ba-4e35-918f-33714da53c64-credential-keys\") pod \"keystone-5d85f8c497-qddnq\" (UID: \"fa9cece5-f7ba-4e35-918f-33714da53c64\") " pod="watcher-kuttl-default/keystone-5d85f8c497-qddnq" Dec 02 20:36:24 crc kubenswrapper[4796]: I1202 20:36:24.746650 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa9cece5-f7ba-4e35-918f-33714da53c64-combined-ca-bundle\") pod \"keystone-5d85f8c497-qddnq\" (UID: \"fa9cece5-f7ba-4e35-918f-33714da53c64\") " pod="watcher-kuttl-default/keystone-5d85f8c497-qddnq" Dec 02 20:36:24 crc kubenswrapper[4796]: I1202 20:36:24.752033 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:36:24 crc kubenswrapper[4796]: I1202 20:36:24.753680 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa9cece5-f7ba-4e35-918f-33714da53c64-internal-tls-certs\") pod \"keystone-5d85f8c497-qddnq\" (UID: \"fa9cece5-f7ba-4e35-918f-33714da53c64\") " pod="watcher-kuttl-default/keystone-5d85f8c497-qddnq" Dec 02 20:36:24 crc kubenswrapper[4796]: I1202 20:36:24.755099 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fa9cece5-f7ba-4e35-918f-33714da53c64-fernet-keys\") pod \"keystone-5d85f8c497-qddnq\" (UID: \"fa9cece5-f7ba-4e35-918f-33714da53c64\") " pod="watcher-kuttl-default/keystone-5d85f8c497-qddnq" Dec 02 20:36:24 crc kubenswrapper[4796]: I1202 20:36:24.755125 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa9cece5-f7ba-4e35-918f-33714da53c64-config-data\") pod \"keystone-5d85f8c497-qddnq\" (UID: \"fa9cece5-f7ba-4e35-918f-33714da53c64\") " pod="watcher-kuttl-default/keystone-5d85f8c497-qddnq" Dec 02 20:36:24 crc kubenswrapper[4796]: I1202 20:36:24.756459 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa9cece5-f7ba-4e35-918f-33714da53c64-scripts\") pod \"keystone-5d85f8c497-qddnq\" (UID: \"fa9cece5-f7ba-4e35-918f-33714da53c64\") " pod="watcher-kuttl-default/keystone-5d85f8c497-qddnq" Dec 02 20:36:24 crc kubenswrapper[4796]: I1202 20:36:24.757785 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/fa9cece5-f7ba-4e35-918f-33714da53c64-cert-memcached-mtls\") pod \"keystone-5d85f8c497-qddnq\" (UID: \"fa9cece5-f7ba-4e35-918f-33714da53c64\") " pod="watcher-kuttl-default/keystone-5d85f8c497-qddnq" Dec 02 20:36:24 crc kubenswrapper[4796]: I1202 20:36:24.767069 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa9cece5-f7ba-4e35-918f-33714da53c64-combined-ca-bundle\") pod \"keystone-5d85f8c497-qddnq\" (UID: \"fa9cece5-f7ba-4e35-918f-33714da53c64\") " pod="watcher-kuttl-default/keystone-5d85f8c497-qddnq" Dec 02 20:36:24 crc kubenswrapper[4796]: I1202 20:36:24.767090 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa9cece5-f7ba-4e35-918f-33714da53c64-public-tls-certs\") pod \"keystone-5d85f8c497-qddnq\" (UID: \"fa9cece5-f7ba-4e35-918f-33714da53c64\") " pod="watcher-kuttl-default/keystone-5d85f8c497-qddnq" Dec 02 20:36:24 crc 
kubenswrapper[4796]: I1202 20:36:24.767569 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fa9cece5-f7ba-4e35-918f-33714da53c64-credential-keys\") pod \"keystone-5d85f8c497-qddnq\" (UID: \"fa9cece5-f7ba-4e35-918f-33714da53c64\") " pod="watcher-kuttl-default/keystone-5d85f8c497-qddnq" Dec 02 20:36:24 crc kubenswrapper[4796]: I1202 20:36:24.769895 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvtc8\" (UniqueName: \"kubernetes.io/projected/fa9cece5-f7ba-4e35-918f-33714da53c64-kube-api-access-wvtc8\") pod \"keystone-5d85f8c497-qddnq\" (UID: \"fa9cece5-f7ba-4e35-918f-33714da53c64\") " pod="watcher-kuttl-default/keystone-5d85f8c497-qddnq" Dec 02 20:36:24 crc kubenswrapper[4796]: I1202 20:36:24.879657 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-5d85f8c497-qddnq" Dec 02 20:36:25 crc kubenswrapper[4796]: I1202 20:36:25.144332 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:25 crc kubenswrapper[4796]: I1202 20:36:25.166652 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:36:25 crc kubenswrapper[4796]: I1202 20:36:25.338201 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-5d85f8c497-qddnq"] Dec 02 20:36:25 crc kubenswrapper[4796]: W1202 20:36:25.349410 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa9cece5_f7ba_4e35_918f_33714da53c64.slice/crio-f9ec5a69ce6111bd73bb82ef9e1a13e69a6dff300b72487ee3c59234a9bc0443 WatchSource:0}: Error finding container f9ec5a69ce6111bd73bb82ef9e1a13e69a6dff300b72487ee3c59234a9bc0443: Status 404 returned error can't find the container with id f9ec5a69ce6111bd73bb82ef9e1a13e69a6dff300b72487ee3c59234a9bc0443 Dec 02 20:36:26 crc kubenswrapper[4796]: I1202 20:36:26.146840 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-5d85f8c497-qddnq" event={"ID":"fa9cece5-f7ba-4e35-918f-33714da53c64","Type":"ContainerStarted","Data":"c3d298c3e3a29a95fd520d559d10dde4efb0f03fdb73049576d556cc57328fee"} Dec 02 20:36:26 crc kubenswrapper[4796]: I1202 20:36:26.147184 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-5d85f8c497-qddnq" event={"ID":"fa9cece5-f7ba-4e35-918f-33714da53c64","Type":"ContainerStarted","Data":"f9ec5a69ce6111bd73bb82ef9e1a13e69a6dff300b72487ee3c59234a9bc0443"} Dec 02 20:36:26 crc kubenswrapper[4796]: I1202 20:36:26.148209 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/keystone-5d85f8c497-qddnq" Dec 02 20:36:26 crc kubenswrapper[4796]: I1202 20:36:26.169989 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-5d85f8c497-qddnq" podStartSLOduration=2.169974366 podStartE2EDuration="2.169974366s" podCreationTimestamp="2025-12-02 20:36:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:36:26.168811778 +0000 UTC m=+1469.172187332" watchObservedRunningTime="2025-12-02 20:36:26.169974366 +0000 UTC m=+1469.173349900" Dec 02 20:36:27 crc kubenswrapper[4796]: I1202 20:36:27.581711 4796 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:36:27 crc kubenswrapper[4796]: I1202 20:36:27.582198 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="6c128dd3-15ff-4e59-9154-503963c6f219" containerName="watcher-kuttl-api-log" containerID="cri-o://3dbae66bf0c35f14878e8464296a0d22c3101711c74b6c44115f792d9f41cbfd" gracePeriod=30 Dec 02 20:36:27 crc kubenswrapper[4796]: I1202 20:36:27.582391 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="6c128dd3-15ff-4e59-9154-503963c6f219" containerName="watcher-api" containerID="cri-o://64a08f1c513929146cc41957a410cda233c3f0715228e910043c2700c7ff82ad" gracePeriod=30 Dec 02 20:36:28 crc kubenswrapper[4796]: I1202 20:36:28.166671 4796 generic.go:334] "Generic (PLEG): container finished" podID="6c128dd3-15ff-4e59-9154-503963c6f219" containerID="3dbae66bf0c35f14878e8464296a0d22c3101711c74b6c44115f792d9f41cbfd" exitCode=143 Dec 02 20:36:28 crc kubenswrapper[4796]: I1202 20:36:28.167889 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"6c128dd3-15ff-4e59-9154-503963c6f219","Type":"ContainerDied","Data":"3dbae66bf0c35f14878e8464296a0d22c3101711c74b6c44115f792d9f41cbfd"} Dec 02 20:36:28 crc kubenswrapper[4796]: I1202 20:36:28.907946 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.032763 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c128dd3-15ff-4e59-9154-503963c6f219-combined-ca-bundle\") pod \"6c128dd3-15ff-4e59-9154-503963c6f219\" (UID: \"6c128dd3-15ff-4e59-9154-503963c6f219\") " Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.032821 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6c128dd3-15ff-4e59-9154-503963c6f219-cert-memcached-mtls\") pod \"6c128dd3-15ff-4e59-9154-503963c6f219\" (UID: \"6c128dd3-15ff-4e59-9154-503963c6f219\") " Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.032850 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c128dd3-15ff-4e59-9154-503963c6f219-internal-tls-certs\") pod \"6c128dd3-15ff-4e59-9154-503963c6f219\" (UID: \"6c128dd3-15ff-4e59-9154-503963c6f219\") " Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.032891 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c128dd3-15ff-4e59-9154-503963c6f219-logs\") pod \"6c128dd3-15ff-4e59-9154-503963c6f219\" (UID: \"6c128dd3-15ff-4e59-9154-503963c6f219\") " Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.032931 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c128dd3-15ff-4e59-9154-503963c6f219-config-data\") pod \"6c128dd3-15ff-4e59-9154-503963c6f219\" (UID: \"6c128dd3-15ff-4e59-9154-503963c6f219\") " Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.032967 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6c128dd3-15ff-4e59-9154-503963c6f219-public-tls-certs\") pod \"6c128dd3-15ff-4e59-9154-503963c6f219\" (UID: \"6c128dd3-15ff-4e59-9154-503963c6f219\") " Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.032985 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/6c128dd3-15ff-4e59-9154-503963c6f219-custom-prometheus-ca\") pod \"6c128dd3-15ff-4e59-9154-503963c6f219\" (UID: \"6c128dd3-15ff-4e59-9154-503963c6f219\") " Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.033048 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8srt\" (UniqueName: \"kubernetes.io/projected/6c128dd3-15ff-4e59-9154-503963c6f219-kube-api-access-v8srt\") pod \"6c128dd3-15ff-4e59-9154-503963c6f219\" (UID: \"6c128dd3-15ff-4e59-9154-503963c6f219\") " Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.034906 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c128dd3-15ff-4e59-9154-503963c6f219-logs" (OuterVolumeSpecName: "logs") pod "6c128dd3-15ff-4e59-9154-503963c6f219" (UID: "6c128dd3-15ff-4e59-9154-503963c6f219"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.039731 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c128dd3-15ff-4e59-9154-503963c6f219-kube-api-access-v8srt" (OuterVolumeSpecName: "kube-api-access-v8srt") pod "6c128dd3-15ff-4e59-9154-503963c6f219" (UID: "6c128dd3-15ff-4e59-9154-503963c6f219"). InnerVolumeSpecName "kube-api-access-v8srt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.067676 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c128dd3-15ff-4e59-9154-503963c6f219-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c128dd3-15ff-4e59-9154-503963c6f219" (UID: "6c128dd3-15ff-4e59-9154-503963c6f219"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.076399 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c128dd3-15ff-4e59-9154-503963c6f219-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "6c128dd3-15ff-4e59-9154-503963c6f219" (UID: "6c128dd3-15ff-4e59-9154-503963c6f219"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.098597 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c128dd3-15ff-4e59-9154-503963c6f219-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6c128dd3-15ff-4e59-9154-503963c6f219" (UID: "6c128dd3-15ff-4e59-9154-503963c6f219"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.098743 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c128dd3-15ff-4e59-9154-503963c6f219-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6c128dd3-15ff-4e59-9154-503963c6f219" (UID: "6c128dd3-15ff-4e59-9154-503963c6f219"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.102443 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c128dd3-15ff-4e59-9154-503963c6f219-config-data" (OuterVolumeSpecName: "config-data") pod "6c128dd3-15ff-4e59-9154-503963c6f219" (UID: "6c128dd3-15ff-4e59-9154-503963c6f219"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.126459 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c128dd3-15ff-4e59-9154-503963c6f219-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "6c128dd3-15ff-4e59-9154-503963c6f219" (UID: "6c128dd3-15ff-4e59-9154-503963c6f219"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.134965 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c128dd3-15ff-4e59-9154-503963c6f219-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.135008 4796 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6c128dd3-15ff-4e59-9154-503963c6f219-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.135020 4796 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c128dd3-15ff-4e59-9154-503963c6f219-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.135032 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c128dd3-15ff-4e59-9154-503963c6f219-logs\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.135045 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c128dd3-15ff-4e59-9154-503963c6f219-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.135056 4796 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c128dd3-15ff-4e59-9154-503963c6f219-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.135066 4796 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/6c128dd3-15ff-4e59-9154-503963c6f219-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.135079 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8srt\" (UniqueName: \"kubernetes.io/projected/6c128dd3-15ff-4e59-9154-503963c6f219-kube-api-access-v8srt\") on node \"crc\" DevicePath \"\"" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.176318 4796 generic.go:334] "Generic (PLEG): container finished" podID="6c128dd3-15ff-4e59-9154-503963c6f219" containerID="64a08f1c513929146cc41957a410cda233c3f0715228e910043c2700c7ff82ad" exitCode=0 Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.176365 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" 
event={"ID":"6c128dd3-15ff-4e59-9154-503963c6f219","Type":"ContainerDied","Data":"64a08f1c513929146cc41957a410cda233c3f0715228e910043c2700c7ff82ad"} Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.176394 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"6c128dd3-15ff-4e59-9154-503963c6f219","Type":"ContainerDied","Data":"1faa9ad740d9d013262b7437e92427016daa11b76a799dd9166bf26d1db4e408"} Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.176411 4796 scope.go:117] "RemoveContainer" containerID="64a08f1c513929146cc41957a410cda233c3f0715228e910043c2700c7ff82ad" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.176551 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.203537 4796 scope.go:117] "RemoveContainer" containerID="3dbae66bf0c35f14878e8464296a0d22c3101711c74b6c44115f792d9f41cbfd" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.217891 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.224406 4796 scope.go:117] "RemoveContainer" containerID="64a08f1c513929146cc41957a410cda233c3f0715228e910043c2700c7ff82ad" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.224489 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:36:29 crc kubenswrapper[4796]: E1202 20:36:29.225115 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64a08f1c513929146cc41957a410cda233c3f0715228e910043c2700c7ff82ad\": container with ID starting with 64a08f1c513929146cc41957a410cda233c3f0715228e910043c2700c7ff82ad not found: ID does not exist" containerID="64a08f1c513929146cc41957a410cda233c3f0715228e910043c2700c7ff82ad" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.225179 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64a08f1c513929146cc41957a410cda233c3f0715228e910043c2700c7ff82ad"} err="failed to get container status \"64a08f1c513929146cc41957a410cda233c3f0715228e910043c2700c7ff82ad\": rpc error: code = NotFound desc = could not find container \"64a08f1c513929146cc41957a410cda233c3f0715228e910043c2700c7ff82ad\": container with ID starting with 64a08f1c513929146cc41957a410cda233c3f0715228e910043c2700c7ff82ad not found: ID does not exist" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.225214 4796 scope.go:117] "RemoveContainer" containerID="3dbae66bf0c35f14878e8464296a0d22c3101711c74b6c44115f792d9f41cbfd" Dec 02 20:36:29 crc kubenswrapper[4796]: E1202 20:36:29.225518 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dbae66bf0c35f14878e8464296a0d22c3101711c74b6c44115f792d9f41cbfd\": container with ID starting with 3dbae66bf0c35f14878e8464296a0d22c3101711c74b6c44115f792d9f41cbfd not found: ID does not exist" containerID="3dbae66bf0c35f14878e8464296a0d22c3101711c74b6c44115f792d9f41cbfd" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.225605 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dbae66bf0c35f14878e8464296a0d22c3101711c74b6c44115f792d9f41cbfd"} err="failed to get container status 
\"3dbae66bf0c35f14878e8464296a0d22c3101711c74b6c44115f792d9f41cbfd\": rpc error: code = NotFound desc = could not find container \"3dbae66bf0c35f14878e8464296a0d22c3101711c74b6c44115f792d9f41cbfd\": container with ID starting with 3dbae66bf0c35f14878e8464296a0d22c3101711c74b6c44115f792d9f41cbfd not found: ID does not exist" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.241507 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:36:29 crc kubenswrapper[4796]: E1202 20:36:29.241910 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c128dd3-15ff-4e59-9154-503963c6f219" containerName="watcher-kuttl-api-log" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.241930 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c128dd3-15ff-4e59-9154-503963c6f219" containerName="watcher-kuttl-api-log" Dec 02 20:36:29 crc kubenswrapper[4796]: E1202 20:36:29.241945 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c128dd3-15ff-4e59-9154-503963c6f219" containerName="watcher-api" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.241952 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c128dd3-15ff-4e59-9154-503963c6f219" containerName="watcher-api" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.242103 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c128dd3-15ff-4e59-9154-503963c6f219" containerName="watcher-kuttl-api-log" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.242135 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c128dd3-15ff-4e59-9154-503963c6f219" containerName="watcher-api" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.243166 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.249680 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.258429 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.281634 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c128dd3-15ff-4e59-9154-503963c6f219" path="/var/lib/kubelet/pods/6c128dd3-15ff-4e59-9154-503963c6f219/volumes" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.337310 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tvh9\" (UniqueName: \"kubernetes.io/projected/afcd4408-b152-47e6-9c31-477cc4dcb04e-kube-api-access-9tvh9\") pod \"watcher-kuttl-api-0\" (UID: \"afcd4408-b152-47e6-9c31-477cc4dcb04e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.337390 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/afcd4408-b152-47e6-9c31-477cc4dcb04e-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"afcd4408-b152-47e6-9c31-477cc4dcb04e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.337418 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/afcd4408-b152-47e6-9c31-477cc4dcb04e-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"afcd4408-b152-47e6-9c31-477cc4dcb04e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.337476 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afcd4408-b152-47e6-9c31-477cc4dcb04e-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"afcd4408-b152-47e6-9c31-477cc4dcb04e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.337529 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afcd4408-b152-47e6-9c31-477cc4dcb04e-logs\") pod \"watcher-kuttl-api-0\" (UID: \"afcd4408-b152-47e6-9c31-477cc4dcb04e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.337641 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afcd4408-b152-47e6-9c31-477cc4dcb04e-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"afcd4408-b152-47e6-9c31-477cc4dcb04e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.439160 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afcd4408-b152-47e6-9c31-477cc4dcb04e-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"afcd4408-b152-47e6-9c31-477cc4dcb04e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.439239 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afcd4408-b152-47e6-9c31-477cc4dcb04e-logs\") pod \"watcher-kuttl-api-0\" (UID: \"afcd4408-b152-47e6-9c31-477cc4dcb04e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.439348 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afcd4408-b152-47e6-9c31-477cc4dcb04e-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"afcd4408-b152-47e6-9c31-477cc4dcb04e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.439408 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tvh9\" (UniqueName: \"kubernetes.io/projected/afcd4408-b152-47e6-9c31-477cc4dcb04e-kube-api-access-9tvh9\") pod \"watcher-kuttl-api-0\" (UID: \"afcd4408-b152-47e6-9c31-477cc4dcb04e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.439439 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/afcd4408-b152-47e6-9c31-477cc4dcb04e-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"afcd4408-b152-47e6-9c31-477cc4dcb04e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.439464 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/afcd4408-b152-47e6-9c31-477cc4dcb04e-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"afcd4408-b152-47e6-9c31-477cc4dcb04e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.439739 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afcd4408-b152-47e6-9c31-477cc4dcb04e-logs\") pod \"watcher-kuttl-api-0\" (UID: \"afcd4408-b152-47e6-9c31-477cc4dcb04e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.443217 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afcd4408-b152-47e6-9c31-477cc4dcb04e-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"afcd4408-b152-47e6-9c31-477cc4dcb04e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.445850 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afcd4408-b152-47e6-9c31-477cc4dcb04e-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"afcd4408-b152-47e6-9c31-477cc4dcb04e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.446490 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/afcd4408-b152-47e6-9c31-477cc4dcb04e-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"afcd4408-b152-47e6-9c31-477cc4dcb04e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.446698 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: 
\"kubernetes.io/secret/afcd4408-b152-47e6-9c31-477cc4dcb04e-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"afcd4408-b152-47e6-9c31-477cc4dcb04e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.469887 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tvh9\" (UniqueName: \"kubernetes.io/projected/afcd4408-b152-47e6-9c31-477cc4dcb04e-kube-api-access-9tvh9\") pod \"watcher-kuttl-api-0\" (UID: \"afcd4408-b152-47e6-9c31-477cc4dcb04e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:29 crc kubenswrapper[4796]: I1202 20:36:29.566286 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:30 crc kubenswrapper[4796]: I1202 20:36:30.025214 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:36:30 crc kubenswrapper[4796]: I1202 20:36:30.188119 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"afcd4408-b152-47e6-9c31-477cc4dcb04e","Type":"ContainerStarted","Data":"344d09f3580528029c0556f8f337a3ce1a7b43706454852949f3d9d4c4ab29e8"} Dec 02 20:36:30 crc kubenswrapper[4796]: I1202 20:36:30.485444 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:36:30 crc kubenswrapper[4796]: I1202 20:36:30.523101 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:36:31 crc kubenswrapper[4796]: I1202 20:36:31.088225 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:36:31 crc kubenswrapper[4796]: I1202 20:36:31.201114 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"afcd4408-b152-47e6-9c31-477cc4dcb04e","Type":"ContainerStarted","Data":"bc13b842818936536cc2335ac9d7d0e926da1a247f79d2e5bc2e3e0aa3a6de08"} Dec 02 20:36:31 crc kubenswrapper[4796]: I1202 20:36:31.201182 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:36:31 crc kubenswrapper[4796]: I1202 20:36:31.201197 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"afcd4408-b152-47e6-9c31-477cc4dcb04e","Type":"ContainerStarted","Data":"a0d660d6923d2d9de1bcb4bceb31465e953b2e5bc292d4035e3af86e7f508e0e"} Dec 02 20:36:31 crc kubenswrapper[4796]: I1202 20:36:31.201473 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:31 crc kubenswrapper[4796]: I1202 20:36:31.256679 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:36:31 crc kubenswrapper[4796]: I1202 20:36:31.279411 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.279390419 podStartE2EDuration="2.279390419s" podCreationTimestamp="2025-12-02 20:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:36:31.243693282 +0000 UTC m=+1474.247068806" 
watchObservedRunningTime="2025-12-02 20:36:31.279390419 +0000 UTC m=+1474.282765953" Dec 02 20:36:33 crc kubenswrapper[4796]: I1202 20:36:33.216353 4796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 20:36:33 crc kubenswrapper[4796]: I1202 20:36:33.456354 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:34 crc kubenswrapper[4796]: I1202 20:36:34.566929 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:39 crc kubenswrapper[4796]: I1202 20:36:39.566796 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:39 crc kubenswrapper[4796]: I1202 20:36:39.576566 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:40 crc kubenswrapper[4796]: I1202 20:36:40.284673 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:36:55 crc kubenswrapper[4796]: I1202 20:36:55.189868 4796 patch_prober.go:28] interesting pod/machine-config-daemon-wzhpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 20:36:55 crc kubenswrapper[4796]: I1202 20:36:55.190634 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 20:36:56 crc kubenswrapper[4796]: I1202 20:36:56.503716 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/keystone-5d85f8c497-qddnq" Dec 02 20:36:56 crc kubenswrapper[4796]: I1202 20:36:56.618450 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-76b5bc4fb5-v2nj2"] Dec 02 20:36:56 crc kubenswrapper[4796]: I1202 20:36:56.618728 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/keystone-76b5bc4fb5-v2nj2" podUID="3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac" containerName="keystone-api" containerID="cri-o://953318fa009c34af9afc15fdfb8ea644384e81dc112a7334158e887d2d33642f" gracePeriod=30 Dec 02 20:37:00 crc kubenswrapper[4796]: I1202 20:37:00.518126 4796 generic.go:334] "Generic (PLEG): container finished" podID="3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac" containerID="953318fa009c34af9afc15fdfb8ea644384e81dc112a7334158e887d2d33642f" exitCode=0 Dec 02 20:37:00 crc kubenswrapper[4796]: I1202 20:37:00.518667 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-76b5bc4fb5-v2nj2" event={"ID":"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac","Type":"ContainerDied","Data":"953318fa009c34af9afc15fdfb8ea644384e81dc112a7334158e887d2d33642f"} Dec 02 20:37:00 crc kubenswrapper[4796]: I1202 20:37:00.518946 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-76b5bc4fb5-v2nj2" event={"ID":"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac","Type":"ContainerDied","Data":"ac472f08b79d22fd20ae45c7daae02f6516acb82aac3450a6040dcac10bb7efa"} Dec 02 20:37:00 crc 
kubenswrapper[4796]: I1202 20:37:00.518965 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac472f08b79d22fd20ae45c7daae02f6516acb82aac3450a6040dcac10bb7efa" Dec 02 20:37:00 crc kubenswrapper[4796]: I1202 20:37:00.583072 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-76b5bc4fb5-v2nj2" Dec 02 20:37:00 crc kubenswrapper[4796]: I1202 20:37:00.746408 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-combined-ca-bundle\") pod \"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac\" (UID: \"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac\") " Dec 02 20:37:00 crc kubenswrapper[4796]: I1202 20:37:00.746816 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-public-tls-certs\") pod \"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac\" (UID: \"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac\") " Dec 02 20:37:00 crc kubenswrapper[4796]: I1202 20:37:00.746942 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsflm\" (UniqueName: \"kubernetes.io/projected/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-kube-api-access-bsflm\") pod \"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac\" (UID: \"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac\") " Dec 02 20:37:00 crc kubenswrapper[4796]: I1202 20:37:00.746991 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-scripts\") pod \"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac\" (UID: \"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac\") " Dec 02 20:37:00 crc kubenswrapper[4796]: I1202 20:37:00.747093 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-internal-tls-certs\") pod \"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac\" (UID: \"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac\") " Dec 02 20:37:00 crc kubenswrapper[4796]: I1202 20:37:00.747146 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-credential-keys\") pod \"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac\" (UID: \"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac\") " Dec 02 20:37:00 crc kubenswrapper[4796]: I1202 20:37:00.747173 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-fernet-keys\") pod \"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac\" (UID: \"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac\") " Dec 02 20:37:00 crc kubenswrapper[4796]: I1202 20:37:00.747332 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-config-data\") pod \"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac\" (UID: \"3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac\") " Dec 02 20:37:00 crc kubenswrapper[4796]: I1202 20:37:00.762658 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-scripts" (OuterVolumeSpecName: "scripts") pod "3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac" (UID: "3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:37:00 crc kubenswrapper[4796]: I1202 20:37:00.762880 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-kube-api-access-bsflm" (OuterVolumeSpecName: "kube-api-access-bsflm") pod "3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac" (UID: "3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac"). InnerVolumeSpecName "kube-api-access-bsflm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:37:00 crc kubenswrapper[4796]: I1202 20:37:00.763456 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac" (UID: "3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:37:00 crc kubenswrapper[4796]: I1202 20:37:00.769464 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac" (UID: "3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:37:00 crc kubenswrapper[4796]: I1202 20:37:00.794009 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-config-data" (OuterVolumeSpecName: "config-data") pod "3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac" (UID: "3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:37:00 crc kubenswrapper[4796]: I1202 20:37:00.829704 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac" (UID: "3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:37:00 crc kubenswrapper[4796]: I1202 20:37:00.831375 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac" (UID: "3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:37:00 crc kubenswrapper[4796]: I1202 20:37:00.844794 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac" (UID: "3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:37:00 crc kubenswrapper[4796]: I1202 20:37:00.849053 4796 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:00 crc kubenswrapper[4796]: I1202 20:37:00.849088 4796 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:00 crc kubenswrapper[4796]: I1202 20:37:00.849099 4796 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:00 crc kubenswrapper[4796]: I1202 20:37:00.849107 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:00 crc kubenswrapper[4796]: I1202 20:37:00.849117 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:00 crc kubenswrapper[4796]: I1202 20:37:00.849125 4796 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:00 crc kubenswrapper[4796]: I1202 20:37:00.849133 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsflm\" (UniqueName: \"kubernetes.io/projected/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-kube-api-access-bsflm\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:00 crc kubenswrapper[4796]: I1202 20:37:00.849144 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:01 crc kubenswrapper[4796]: I1202 20:37:01.528406 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-76b5bc4fb5-v2nj2" Dec 02 20:37:01 crc kubenswrapper[4796]: I1202 20:37:01.552059 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-76b5bc4fb5-v2nj2"] Dec 02 20:37:01 crc kubenswrapper[4796]: I1202 20:37:01.557886 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-76b5bc4fb5-v2nj2"] Dec 02 20:37:01 crc kubenswrapper[4796]: I1202 20:37:01.855319 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:37:01 crc kubenswrapper[4796]: I1202 20:37:01.856628 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="825f702b-3e58-4760-a47b-865cf3080bce" containerName="proxy-httpd" containerID="cri-o://b13945fcb1675b6c7df2f5cb7bd1e8172a70109c6844619db930227af0a34e2a" gracePeriod=30 Dec 02 20:37:01 crc kubenswrapper[4796]: I1202 20:37:01.856799 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="825f702b-3e58-4760-a47b-865cf3080bce" containerName="sg-core" containerID="cri-o://960a1df79346d4be03437ffdb3f9eb7c0ddef232d0020e3fec6d34ed0f8ea9d6" gracePeriod=30 Dec 02 20:37:01 crc kubenswrapper[4796]: I1202 20:37:01.856870 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="825f702b-3e58-4760-a47b-865cf3080bce" containerName="ceilometer-notification-agent" containerID="cri-o://55750a1422961720306c434826f2c9455f97fcc65a063218f3c37f2080d383bf" gracePeriod=30 Dec 02 20:37:01 crc kubenswrapper[4796]: I1202 20:37:01.855925 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="825f702b-3e58-4760-a47b-865cf3080bce" containerName="ceilometer-central-agent" containerID="cri-o://d92592e37903fcfe6b893d4e6b0936950336ed2f3ddc01723362279958190d63" gracePeriod=30 Dec 02 20:37:02 crc kubenswrapper[4796]: I1202 20:37:02.541143 4796 generic.go:334] "Generic (PLEG): container finished" podID="825f702b-3e58-4760-a47b-865cf3080bce" containerID="b13945fcb1675b6c7df2f5cb7bd1e8172a70109c6844619db930227af0a34e2a" exitCode=0 Dec 02 20:37:02 crc kubenswrapper[4796]: I1202 20:37:02.542313 4796 generic.go:334] "Generic (PLEG): container finished" podID="825f702b-3e58-4760-a47b-865cf3080bce" containerID="960a1df79346d4be03437ffdb3f9eb7c0ddef232d0020e3fec6d34ed0f8ea9d6" exitCode=2 Dec 02 20:37:02 crc kubenswrapper[4796]: I1202 20:37:02.541218 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"825f702b-3e58-4760-a47b-865cf3080bce","Type":"ContainerDied","Data":"b13945fcb1675b6c7df2f5cb7bd1e8172a70109c6844619db930227af0a34e2a"} Dec 02 20:37:02 crc kubenswrapper[4796]: I1202 20:37:02.542442 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"825f702b-3e58-4760-a47b-865cf3080bce","Type":"ContainerDied","Data":"960a1df79346d4be03437ffdb3f9eb7c0ddef232d0020e3fec6d34ed0f8ea9d6"} Dec 02 20:37:02 crc kubenswrapper[4796]: I1202 20:37:02.542460 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"825f702b-3e58-4760-a47b-865cf3080bce","Type":"ContainerDied","Data":"d92592e37903fcfe6b893d4e6b0936950336ed2f3ddc01723362279958190d63"} Dec 02 20:37:02 crc kubenswrapper[4796]: I1202 20:37:02.542387 4796 generic.go:334] 
"Generic (PLEG): container finished" podID="825f702b-3e58-4760-a47b-865cf3080bce" containerID="d92592e37903fcfe6b893d4e6b0936950336ed2f3ddc01723362279958190d63" exitCode=0 Dec 02 20:37:03 crc kubenswrapper[4796]: I1202 20:37:03.278145 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac" path="/var/lib/kubelet/pods/3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac/volumes" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.287856 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.455233 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/825f702b-3e58-4760-a47b-865cf3080bce-config-data\") pod \"825f702b-3e58-4760-a47b-865cf3080bce\" (UID: \"825f702b-3e58-4760-a47b-865cf3080bce\") " Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.455297 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/825f702b-3e58-4760-a47b-865cf3080bce-combined-ca-bundle\") pod \"825f702b-3e58-4760-a47b-865cf3080bce\" (UID: \"825f702b-3e58-4760-a47b-865cf3080bce\") " Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.455325 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/825f702b-3e58-4760-a47b-865cf3080bce-sg-core-conf-yaml\") pod \"825f702b-3e58-4760-a47b-865cf3080bce\" (UID: \"825f702b-3e58-4760-a47b-865cf3080bce\") " Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.455352 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/825f702b-3e58-4760-a47b-865cf3080bce-log-httpd\") pod \"825f702b-3e58-4760-a47b-865cf3080bce\" (UID: \"825f702b-3e58-4760-a47b-865cf3080bce\") " Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.455411 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/825f702b-3e58-4760-a47b-865cf3080bce-ceilometer-tls-certs\") pod \"825f702b-3e58-4760-a47b-865cf3080bce\" (UID: \"825f702b-3e58-4760-a47b-865cf3080bce\") " Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.455450 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/825f702b-3e58-4760-a47b-865cf3080bce-run-httpd\") pod \"825f702b-3e58-4760-a47b-865cf3080bce\" (UID: \"825f702b-3e58-4760-a47b-865cf3080bce\") " Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.455466 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlfhc\" (UniqueName: \"kubernetes.io/projected/825f702b-3e58-4760-a47b-865cf3080bce-kube-api-access-mlfhc\") pod \"825f702b-3e58-4760-a47b-865cf3080bce\" (UID: \"825f702b-3e58-4760-a47b-865cf3080bce\") " Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.455506 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/825f702b-3e58-4760-a47b-865cf3080bce-scripts\") pod \"825f702b-3e58-4760-a47b-865cf3080bce\" (UID: \"825f702b-3e58-4760-a47b-865cf3080bce\") " Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.457473 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/empty-dir/825f702b-3e58-4760-a47b-865cf3080bce-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "825f702b-3e58-4760-a47b-865cf3080bce" (UID: "825f702b-3e58-4760-a47b-865cf3080bce"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.458487 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/825f702b-3e58-4760-a47b-865cf3080bce-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "825f702b-3e58-4760-a47b-865cf3080bce" (UID: "825f702b-3e58-4760-a47b-865cf3080bce"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.469576 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/825f702b-3e58-4760-a47b-865cf3080bce-kube-api-access-mlfhc" (OuterVolumeSpecName: "kube-api-access-mlfhc") pod "825f702b-3e58-4760-a47b-865cf3080bce" (UID: "825f702b-3e58-4760-a47b-865cf3080bce"). InnerVolumeSpecName "kube-api-access-mlfhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.489410 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/825f702b-3e58-4760-a47b-865cf3080bce-scripts" (OuterVolumeSpecName: "scripts") pod "825f702b-3e58-4760-a47b-865cf3080bce" (UID: "825f702b-3e58-4760-a47b-865cf3080bce"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.545397 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/825f702b-3e58-4760-a47b-865cf3080bce-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "825f702b-3e58-4760-a47b-865cf3080bce" (UID: "825f702b-3e58-4760-a47b-865cf3080bce"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.553181 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/825f702b-3e58-4760-a47b-865cf3080bce-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "825f702b-3e58-4760-a47b-865cf3080bce" (UID: "825f702b-3e58-4760-a47b-865cf3080bce"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.557341 4796 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/825f702b-3e58-4760-a47b-865cf3080bce-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.557368 4796 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/825f702b-3e58-4760-a47b-865cf3080bce-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.557376 4796 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/825f702b-3e58-4760-a47b-865cf3080bce-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.557386 4796 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/825f702b-3e58-4760-a47b-865cf3080bce-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.557394 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlfhc\" (UniqueName: \"kubernetes.io/projected/825f702b-3e58-4760-a47b-865cf3080bce-kube-api-access-mlfhc\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.557403 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/825f702b-3e58-4760-a47b-865cf3080bce-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.565462 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/825f702b-3e58-4760-a47b-865cf3080bce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "825f702b-3e58-4760-a47b-865cf3080bce" (UID: "825f702b-3e58-4760-a47b-865cf3080bce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.588183 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/825f702b-3e58-4760-a47b-865cf3080bce-config-data" (OuterVolumeSpecName: "config-data") pod "825f702b-3e58-4760-a47b-865cf3080bce" (UID: "825f702b-3e58-4760-a47b-865cf3080bce"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.596694 4796 generic.go:334] "Generic (PLEG): container finished" podID="825f702b-3e58-4760-a47b-865cf3080bce" containerID="55750a1422961720306c434826f2c9455f97fcc65a063218f3c37f2080d383bf" exitCode=0 Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.596739 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"825f702b-3e58-4760-a47b-865cf3080bce","Type":"ContainerDied","Data":"55750a1422961720306c434826f2c9455f97fcc65a063218f3c37f2080d383bf"} Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.596767 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"825f702b-3e58-4760-a47b-865cf3080bce","Type":"ContainerDied","Data":"3e70c565ba0c6382b8755e911bc304a237508b1b753ff36284d50978e459cee9"} Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.596784 4796 scope.go:117] "RemoveContainer" containerID="b13945fcb1675b6c7df2f5cb7bd1e8172a70109c6844619db930227af0a34e2a" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.596907 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.622800 4796 scope.go:117] "RemoveContainer" containerID="960a1df79346d4be03437ffdb3f9eb7c0ddef232d0020e3fec6d34ed0f8ea9d6" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.630716 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.644065 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.658481 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/825f702b-3e58-4760-a47b-865cf3080bce-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.658523 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/825f702b-3e58-4760-a47b-865cf3080bce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.661928 4796 scope.go:117] "RemoveContainer" containerID="55750a1422961720306c434826f2c9455f97fcc65a063218f3c37f2080d383bf" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.679607 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:37:06 crc kubenswrapper[4796]: E1202 20:37:06.680143 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="825f702b-3e58-4760-a47b-865cf3080bce" containerName="sg-core" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.680172 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="825f702b-3e58-4760-a47b-865cf3080bce" containerName="sg-core" Dec 02 20:37:06 crc kubenswrapper[4796]: E1202 20:37:06.680199 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="825f702b-3e58-4760-a47b-865cf3080bce" containerName="ceilometer-notification-agent" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.680212 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="825f702b-3e58-4760-a47b-865cf3080bce" containerName="ceilometer-notification-agent" Dec 02 20:37:06 crc kubenswrapper[4796]: E1202 20:37:06.680235 4796 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac" containerName="keystone-api" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.680248 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac" containerName="keystone-api" Dec 02 20:37:06 crc kubenswrapper[4796]: E1202 20:37:06.680451 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="825f702b-3e58-4760-a47b-865cf3080bce" containerName="proxy-httpd" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.680466 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="825f702b-3e58-4760-a47b-865cf3080bce" containerName="proxy-httpd" Dec 02 20:37:06 crc kubenswrapper[4796]: E1202 20:37:06.680487 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="825f702b-3e58-4760-a47b-865cf3080bce" containerName="ceilometer-central-agent" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.680501 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="825f702b-3e58-4760-a47b-865cf3080bce" containerName="ceilometer-central-agent" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.680801 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="825f702b-3e58-4760-a47b-865cf3080bce" containerName="ceilometer-central-agent" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.680842 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="825f702b-3e58-4760-a47b-865cf3080bce" containerName="sg-core" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.680874 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d7a0632-8bd8-46c2-9da9-f90d7e6bd8ac" containerName="keystone-api" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.680890 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="825f702b-3e58-4760-a47b-865cf3080bce" containerName="ceilometer-notification-agent" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.680914 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="825f702b-3e58-4760-a47b-865cf3080bce" containerName="proxy-httpd" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.683445 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.686626 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.686844 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.687087 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.693058 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.745723 4796 scope.go:117] "RemoveContainer" containerID="d92592e37903fcfe6b893d4e6b0936950336ed2f3ddc01723362279958190d63" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.787928 4796 scope.go:117] "RemoveContainer" containerID="b13945fcb1675b6c7df2f5cb7bd1e8172a70109c6844619db930227af0a34e2a" Dec 02 20:37:06 crc kubenswrapper[4796]: E1202 20:37:06.791376 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b13945fcb1675b6c7df2f5cb7bd1e8172a70109c6844619db930227af0a34e2a\": container with ID starting with b13945fcb1675b6c7df2f5cb7bd1e8172a70109c6844619db930227af0a34e2a not found: ID does not exist" containerID="b13945fcb1675b6c7df2f5cb7bd1e8172a70109c6844619db930227af0a34e2a" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.791408 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b13945fcb1675b6c7df2f5cb7bd1e8172a70109c6844619db930227af0a34e2a"} err="failed to get container status \"b13945fcb1675b6c7df2f5cb7bd1e8172a70109c6844619db930227af0a34e2a\": rpc error: code = NotFound desc = could not find container \"b13945fcb1675b6c7df2f5cb7bd1e8172a70109c6844619db930227af0a34e2a\": container with ID starting with b13945fcb1675b6c7df2f5cb7bd1e8172a70109c6844619db930227af0a34e2a not found: ID does not exist" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.791433 4796 scope.go:117] "RemoveContainer" containerID="960a1df79346d4be03437ffdb3f9eb7c0ddef232d0020e3fec6d34ed0f8ea9d6" Dec 02 20:37:06 crc kubenswrapper[4796]: E1202 20:37:06.791892 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"960a1df79346d4be03437ffdb3f9eb7c0ddef232d0020e3fec6d34ed0f8ea9d6\": container with ID starting with 960a1df79346d4be03437ffdb3f9eb7c0ddef232d0020e3fec6d34ed0f8ea9d6 not found: ID does not exist" containerID="960a1df79346d4be03437ffdb3f9eb7c0ddef232d0020e3fec6d34ed0f8ea9d6" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.791945 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"960a1df79346d4be03437ffdb3f9eb7c0ddef232d0020e3fec6d34ed0f8ea9d6"} err="failed to get container status \"960a1df79346d4be03437ffdb3f9eb7c0ddef232d0020e3fec6d34ed0f8ea9d6\": rpc error: code = NotFound desc = could not find container \"960a1df79346d4be03437ffdb3f9eb7c0ddef232d0020e3fec6d34ed0f8ea9d6\": container with ID starting with 960a1df79346d4be03437ffdb3f9eb7c0ddef232d0020e3fec6d34ed0f8ea9d6 not found: ID does not exist" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.791979 4796 scope.go:117] "RemoveContainer" 
containerID="55750a1422961720306c434826f2c9455f97fcc65a063218f3c37f2080d383bf" Dec 02 20:37:06 crc kubenswrapper[4796]: E1202 20:37:06.793137 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55750a1422961720306c434826f2c9455f97fcc65a063218f3c37f2080d383bf\": container with ID starting with 55750a1422961720306c434826f2c9455f97fcc65a063218f3c37f2080d383bf not found: ID does not exist" containerID="55750a1422961720306c434826f2c9455f97fcc65a063218f3c37f2080d383bf" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.793168 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55750a1422961720306c434826f2c9455f97fcc65a063218f3c37f2080d383bf"} err="failed to get container status \"55750a1422961720306c434826f2c9455f97fcc65a063218f3c37f2080d383bf\": rpc error: code = NotFound desc = could not find container \"55750a1422961720306c434826f2c9455f97fcc65a063218f3c37f2080d383bf\": container with ID starting with 55750a1422961720306c434826f2c9455f97fcc65a063218f3c37f2080d383bf not found: ID does not exist" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.793182 4796 scope.go:117] "RemoveContainer" containerID="d92592e37903fcfe6b893d4e6b0936950336ed2f3ddc01723362279958190d63" Dec 02 20:37:06 crc kubenswrapper[4796]: E1202 20:37:06.793449 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d92592e37903fcfe6b893d4e6b0936950336ed2f3ddc01723362279958190d63\": container with ID starting with d92592e37903fcfe6b893d4e6b0936950336ed2f3ddc01723362279958190d63 not found: ID does not exist" containerID="d92592e37903fcfe6b893d4e6b0936950336ed2f3ddc01723362279958190d63" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.793477 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d92592e37903fcfe6b893d4e6b0936950336ed2f3ddc01723362279958190d63"} err="failed to get container status \"d92592e37903fcfe6b893d4e6b0936950336ed2f3ddc01723362279958190d63\": rpc error: code = NotFound desc = could not find container \"d92592e37903fcfe6b893d4e6b0936950336ed2f3ddc01723362279958190d63\": container with ID starting with d92592e37903fcfe6b893d4e6b0936950336ed2f3ddc01723362279958190d63 not found: ID does not exist" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.861752 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.861988 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-config-data\") pod \"ceilometer-0\" (UID: \"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.862056 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-scripts\") pod \"ceilometer-0\" (UID: \"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:06 crc 
kubenswrapper[4796]: I1202 20:37:06.862162 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdqzc\" (UniqueName: \"kubernetes.io/projected/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-kube-api-access-jdqzc\") pod \"ceilometer-0\" (UID: \"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.862215 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-log-httpd\") pod \"ceilometer-0\" (UID: \"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.862241 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.862312 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-run-httpd\") pod \"ceilometer-0\" (UID: \"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.862345 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.963844 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-run-httpd\") pod \"ceilometer-0\" (UID: \"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.963900 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.963927 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.963978 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-config-data\") pod \"ceilometer-0\" (UID: \"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.964001 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-scripts\") pod \"ceilometer-0\" (UID: \"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.964034 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdqzc\" (UniqueName: \"kubernetes.io/projected/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-kube-api-access-jdqzc\") pod \"ceilometer-0\" (UID: \"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.964064 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-log-httpd\") pod \"ceilometer-0\" (UID: \"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.964085 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.965300 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-log-httpd\") pod \"ceilometer-0\" (UID: \"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.965330 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-run-httpd\") pod \"ceilometer-0\" (UID: \"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.969961 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.970092 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.970739 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-config-data\") pod \"ceilometer-0\" (UID: \"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.983907 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-scripts\") pod \"ceilometer-0\" (UID: \"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.984778 4796 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-jdqzc\" (UniqueName: \"kubernetes.io/projected/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-kube-api-access-jdqzc\") pod \"ceilometer-0\" (UID: \"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:06 crc kubenswrapper[4796]: I1202 20:37:06.986426 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:07 crc kubenswrapper[4796]: I1202 20:37:07.054064 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:07 crc kubenswrapper[4796]: I1202 20:37:07.283162 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="825f702b-3e58-4760-a47b-865cf3080bce" path="/var/lib/kubelet/pods/825f702b-3e58-4760-a47b-865cf3080bce/volumes" Dec 02 20:37:07 crc kubenswrapper[4796]: I1202 20:37:07.526770 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:37:07 crc kubenswrapper[4796]: I1202 20:37:07.608299 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad","Type":"ContainerStarted","Data":"ae5a732f1f2bf7e989eda850fd629410f7de861f9cc637c0410a85e1368bea4b"} Dec 02 20:37:08 crc kubenswrapper[4796]: I1202 20:37:08.617130 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad","Type":"ContainerStarted","Data":"9fd2c3f1a9c49f3d793fec2949952e5b5589f5655307693388366c14f839460f"} Dec 02 20:37:09 crc kubenswrapper[4796]: I1202 20:37:09.632992 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad","Type":"ContainerStarted","Data":"f21e1969dd9d01e8041370fa63228288d68d54b1723d79730e391278236bf133"} Dec 02 20:37:10 crc kubenswrapper[4796]: I1202 20:37:10.644397 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad","Type":"ContainerStarted","Data":"7a6b02a6b40bb22cb17f73cc4d96c94e4ca24c3d81704a2dbe94f043a792e96e"} Dec 02 20:37:11 crc kubenswrapper[4796]: I1202 20:37:11.663486 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad","Type":"ContainerStarted","Data":"9d4dff48935d6fc7db8db69ee5b847f99c3d41a88edc3341aa2ba3b027c3056e"} Dec 02 20:37:11 crc kubenswrapper[4796]: I1202 20:37:11.664002 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:11 crc kubenswrapper[4796]: I1202 20:37:11.713362 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.366718432 podStartE2EDuration="5.713333088s" podCreationTimestamp="2025-12-02 20:37:06 +0000 UTC" firstStartedPulling="2025-12-02 20:37:07.540775096 +0000 UTC m=+1510.544150630" lastFinishedPulling="2025-12-02 20:37:10.887389752 +0000 UTC m=+1513.890765286" observedRunningTime="2025-12-02 20:37:11.708032241 +0000 UTC m=+1514.711407815" watchObservedRunningTime="2025-12-02 20:37:11.713333088 +0000 UTC 
m=+1514.716708632" Dec 02 20:37:18 crc kubenswrapper[4796]: I1202 20:37:18.132380 4796 scope.go:117] "RemoveContainer" containerID="1a76e7004cdaec1766fbe0db00333c7ac280d9858aee130792765181996528d9" Dec 02 20:37:18 crc kubenswrapper[4796]: I1202 20:37:18.159357 4796 scope.go:117] "RemoveContainer" containerID="7392c85a53b14151a778e3745e06c54f5d8dc44694c60d5a00eb7a1310fb4b95" Dec 02 20:37:18 crc kubenswrapper[4796]: I1202 20:37:18.182619 4796 scope.go:117] "RemoveContainer" containerID="703221751a176783fd13fd0e9d1beaaca81acedaca751b6309a0c67e5efc0a24" Dec 02 20:37:25 crc kubenswrapper[4796]: I1202 20:37:25.189935 4796 patch_prober.go:28] interesting pod/machine-config-daemon-wzhpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 20:37:25 crc kubenswrapper[4796]: I1202 20:37:25.190717 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 20:37:26 crc kubenswrapper[4796]: I1202 20:37:26.787209 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fll8l"] Dec 02 20:37:26 crc kubenswrapper[4796]: I1202 20:37:26.789637 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fll8l" Dec 02 20:37:26 crc kubenswrapper[4796]: I1202 20:37:26.842770 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fll8l"] Dec 02 20:37:26 crc kubenswrapper[4796]: I1202 20:37:26.844989 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a80e7e4b-45d2-4693-961e-5c9bc35290f8-utilities\") pod \"redhat-marketplace-fll8l\" (UID: \"a80e7e4b-45d2-4693-961e-5c9bc35290f8\") " pod="openshift-marketplace/redhat-marketplace-fll8l" Dec 02 20:37:26 crc kubenswrapper[4796]: I1202 20:37:26.845067 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a80e7e4b-45d2-4693-961e-5c9bc35290f8-catalog-content\") pod \"redhat-marketplace-fll8l\" (UID: \"a80e7e4b-45d2-4693-961e-5c9bc35290f8\") " pod="openshift-marketplace/redhat-marketplace-fll8l" Dec 02 20:37:26 crc kubenswrapper[4796]: I1202 20:37:26.845150 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgt24\" (UniqueName: \"kubernetes.io/projected/a80e7e4b-45d2-4693-961e-5c9bc35290f8-kube-api-access-tgt24\") pod \"redhat-marketplace-fll8l\" (UID: \"a80e7e4b-45d2-4693-961e-5c9bc35290f8\") " pod="openshift-marketplace/redhat-marketplace-fll8l" Dec 02 20:37:26 crc kubenswrapper[4796]: I1202 20:37:26.946893 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a80e7e4b-45d2-4693-961e-5c9bc35290f8-utilities\") pod \"redhat-marketplace-fll8l\" (UID: \"a80e7e4b-45d2-4693-961e-5c9bc35290f8\") " pod="openshift-marketplace/redhat-marketplace-fll8l" Dec 02 20:37:26 crc kubenswrapper[4796]: I1202 20:37:26.946985 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a80e7e4b-45d2-4693-961e-5c9bc35290f8-catalog-content\") pod \"redhat-marketplace-fll8l\" (UID: \"a80e7e4b-45d2-4693-961e-5c9bc35290f8\") " pod="openshift-marketplace/redhat-marketplace-fll8l" Dec 02 20:37:26 crc kubenswrapper[4796]: I1202 20:37:26.947065 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgt24\" (UniqueName: \"kubernetes.io/projected/a80e7e4b-45d2-4693-961e-5c9bc35290f8-kube-api-access-tgt24\") pod \"redhat-marketplace-fll8l\" (UID: \"a80e7e4b-45d2-4693-961e-5c9bc35290f8\") " pod="openshift-marketplace/redhat-marketplace-fll8l" Dec 02 20:37:26 crc kubenswrapper[4796]: I1202 20:37:26.947460 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a80e7e4b-45d2-4693-961e-5c9bc35290f8-utilities\") pod \"redhat-marketplace-fll8l\" (UID: \"a80e7e4b-45d2-4693-961e-5c9bc35290f8\") " pod="openshift-marketplace/redhat-marketplace-fll8l" Dec 02 20:37:26 crc kubenswrapper[4796]: I1202 20:37:26.947537 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a80e7e4b-45d2-4693-961e-5c9bc35290f8-catalog-content\") pod \"redhat-marketplace-fll8l\" (UID: \"a80e7e4b-45d2-4693-961e-5c9bc35290f8\") " pod="openshift-marketplace/redhat-marketplace-fll8l" Dec 02 20:37:26 crc kubenswrapper[4796]: I1202 20:37:26.969634 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgt24\" (UniqueName: \"kubernetes.io/projected/a80e7e4b-45d2-4693-961e-5c9bc35290f8-kube-api-access-tgt24\") pod \"redhat-marketplace-fll8l\" (UID: \"a80e7e4b-45d2-4693-961e-5c9bc35290f8\") " pod="openshift-marketplace/redhat-marketplace-fll8l" Dec 02 20:37:27 crc kubenswrapper[4796]: I1202 20:37:27.125440 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fll8l" Dec 02 20:37:27 crc kubenswrapper[4796]: I1202 20:37:27.636095 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fll8l"] Dec 02 20:37:27 crc kubenswrapper[4796]: I1202 20:37:27.851586 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fll8l" event={"ID":"a80e7e4b-45d2-4693-961e-5c9bc35290f8","Type":"ContainerStarted","Data":"b87c8ac3114cffa44a171b91a2ed7bf25179abc150ac4464025a88aeff55e89f"} Dec 02 20:37:27 crc kubenswrapper[4796]: I1202 20:37:27.851660 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fll8l" event={"ID":"a80e7e4b-45d2-4693-961e-5c9bc35290f8","Type":"ContainerStarted","Data":"c0541886d18231659fa3ac5b9eb4f44967d8042f923b511510dec0386d2e4542"} Dec 02 20:37:28 crc kubenswrapper[4796]: I1202 20:37:28.860866 4796 generic.go:334] "Generic (PLEG): container finished" podID="a80e7e4b-45d2-4693-961e-5c9bc35290f8" containerID="b87c8ac3114cffa44a171b91a2ed7bf25179abc150ac4464025a88aeff55e89f" exitCode=0 Dec 02 20:37:28 crc kubenswrapper[4796]: I1202 20:37:28.860913 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fll8l" event={"ID":"a80e7e4b-45d2-4693-961e-5c9bc35290f8","Type":"ContainerDied","Data":"b87c8ac3114cffa44a171b91a2ed7bf25179abc150ac4464025a88aeff55e89f"} Dec 02 20:37:29 crc kubenswrapper[4796]: I1202 20:37:29.872602 4796 generic.go:334] "Generic (PLEG): container finished" podID="a80e7e4b-45d2-4693-961e-5c9bc35290f8" containerID="2dd9c096369ab0ba67f851f41b9dfdc9b4a357b42f7d0c66dd20ebe92f678f15" exitCode=0 Dec 02 20:37:29 crc kubenswrapper[4796]: I1202 20:37:29.872695 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fll8l" event={"ID":"a80e7e4b-45d2-4693-961e-5c9bc35290f8","Type":"ContainerDied","Data":"2dd9c096369ab0ba67f851f41b9dfdc9b4a357b42f7d0c66dd20ebe92f678f15"} Dec 02 20:37:30 crc kubenswrapper[4796]: I1202 20:37:30.909625 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fll8l" event={"ID":"a80e7e4b-45d2-4693-961e-5c9bc35290f8","Type":"ContainerStarted","Data":"9a581e00b0e688006db6187f3cbc5e911519e5fd8c9302553d46e6a194667bd4"} Dec 02 20:37:30 crc kubenswrapper[4796]: I1202 20:37:30.942846 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fll8l" podStartSLOduration=3.212562991 podStartE2EDuration="4.942823417s" podCreationTimestamp="2025-12-02 20:37:26 +0000 UTC" firstStartedPulling="2025-12-02 20:37:28.862916295 +0000 UTC m=+1531.866291829" lastFinishedPulling="2025-12-02 20:37:30.593176711 +0000 UTC m=+1533.596552255" observedRunningTime="2025-12-02 20:37:30.933012852 +0000 UTC m=+1533.936388386" watchObservedRunningTime="2025-12-02 20:37:30.942823417 +0000 UTC m=+1533.946198951" Dec 02 20:37:36 crc kubenswrapper[4796]: I1202 20:37:36.647366 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-pvj29"] Dec 02 20:37:36 crc kubenswrapper[4796]: I1202 20:37:36.656198 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-pvj29"] Dec 02 20:37:36 crc kubenswrapper[4796]: I1202 20:37:36.716867 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:37:36 crc 
kubenswrapper[4796]: I1202 20:37:36.717434 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="8d488a05-dbc8-49d8-921b-7fe2b03a8eef" containerName="watcher-decision-engine" containerID="cri-o://47e8efee9dd04fbd6799794418af3768051d531e7be8185a0af5546b8dbdd82a" gracePeriod=30 Dec 02 20:37:36 crc kubenswrapper[4796]: I1202 20:37:36.728090 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcherbe33-account-delete-p6ms2"] Dec 02 20:37:36 crc kubenswrapper[4796]: I1202 20:37:36.729729 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcherbe33-account-delete-p6ms2" Dec 02 20:37:36 crc kubenswrapper[4796]: I1202 20:37:36.738735 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:37:36 crc kubenswrapper[4796]: I1202 20:37:36.739026 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="1b17e49e-2ba6-420e-913c-841bcf939fae" containerName="watcher-applier" containerID="cri-o://25f9d883bfe27f5baa25f86181b23b7a35da24f5f9c82fe960faf1a1eefb32ea" gracePeriod=30 Dec 02 20:37:36 crc kubenswrapper[4796]: I1202 20:37:36.759308 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcherbe33-account-delete-p6ms2"] Dec 02 20:37:36 crc kubenswrapper[4796]: I1202 20:37:36.807095 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:37:36 crc kubenswrapper[4796]: I1202 20:37:36.807405 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="afcd4408-b152-47e6-9c31-477cc4dcb04e" containerName="watcher-kuttl-api-log" containerID="cri-o://a0d660d6923d2d9de1bcb4bceb31465e953b2e5bc292d4035e3af86e7f508e0e" gracePeriod=30 Dec 02 20:37:36 crc kubenswrapper[4796]: I1202 20:37:36.807573 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="afcd4408-b152-47e6-9c31-477cc4dcb04e" containerName="watcher-api" containerID="cri-o://bc13b842818936536cc2335ac9d7d0e926da1a247f79d2e5bc2e3e0aa3a6de08" gracePeriod=30 Dec 02 20:37:36 crc kubenswrapper[4796]: I1202 20:37:36.844564 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4028ab5e-8906-45fd-abd6-0954db71014e-operator-scripts\") pod \"watcherbe33-account-delete-p6ms2\" (UID: \"4028ab5e-8906-45fd-abd6-0954db71014e\") " pod="watcher-kuttl-default/watcherbe33-account-delete-p6ms2" Dec 02 20:37:36 crc kubenswrapper[4796]: I1202 20:37:36.844626 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8kd5\" (UniqueName: \"kubernetes.io/projected/4028ab5e-8906-45fd-abd6-0954db71014e-kube-api-access-x8kd5\") pod \"watcherbe33-account-delete-p6ms2\" (UID: \"4028ab5e-8906-45fd-abd6-0954db71014e\") " pod="watcher-kuttl-default/watcherbe33-account-delete-p6ms2" Dec 02 20:37:36 crc kubenswrapper[4796]: I1202 20:37:36.945906 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8kd5\" (UniqueName: \"kubernetes.io/projected/4028ab5e-8906-45fd-abd6-0954db71014e-kube-api-access-x8kd5\") pod \"watcherbe33-account-delete-p6ms2\" 
(UID: \"4028ab5e-8906-45fd-abd6-0954db71014e\") " pod="watcher-kuttl-default/watcherbe33-account-delete-p6ms2" Dec 02 20:37:36 crc kubenswrapper[4796]: I1202 20:37:36.946114 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4028ab5e-8906-45fd-abd6-0954db71014e-operator-scripts\") pod \"watcherbe33-account-delete-p6ms2\" (UID: \"4028ab5e-8906-45fd-abd6-0954db71014e\") " pod="watcher-kuttl-default/watcherbe33-account-delete-p6ms2" Dec 02 20:37:36 crc kubenswrapper[4796]: I1202 20:37:36.947180 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4028ab5e-8906-45fd-abd6-0954db71014e-operator-scripts\") pod \"watcherbe33-account-delete-p6ms2\" (UID: \"4028ab5e-8906-45fd-abd6-0954db71014e\") " pod="watcher-kuttl-default/watcherbe33-account-delete-p6ms2" Dec 02 20:37:36 crc kubenswrapper[4796]: I1202 20:37:36.968924 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8kd5\" (UniqueName: \"kubernetes.io/projected/4028ab5e-8906-45fd-abd6-0954db71014e-kube-api-access-x8kd5\") pod \"watcherbe33-account-delete-p6ms2\" (UID: \"4028ab5e-8906-45fd-abd6-0954db71014e\") " pod="watcher-kuttl-default/watcherbe33-account-delete-p6ms2" Dec 02 20:37:37 crc kubenswrapper[4796]: I1202 20:37:37.049157 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcherbe33-account-delete-p6ms2" Dec 02 20:37:37 crc kubenswrapper[4796]: I1202 20:37:37.063979 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:37 crc kubenswrapper[4796]: I1202 20:37:37.125640 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fll8l" Dec 02 20:37:37 crc kubenswrapper[4796]: I1202 20:37:37.126823 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fll8l" Dec 02 20:37:37 crc kubenswrapper[4796]: I1202 20:37:37.231082 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fll8l" Dec 02 20:37:37 crc kubenswrapper[4796]: I1202 20:37:37.311197 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f678676b-e117-40a0-a5fa-7038047e75d5" path="/var/lib/kubelet/pods/f678676b-e117-40a0-a5fa-7038047e75d5/volumes" Dec 02 20:37:37 crc kubenswrapper[4796]: I1202 20:37:37.622129 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcherbe33-account-delete-p6ms2"] Dec 02 20:37:38 crc kubenswrapper[4796]: I1202 20:37:38.017616 4796 generic.go:334] "Generic (PLEG): container finished" podID="afcd4408-b152-47e6-9c31-477cc4dcb04e" containerID="bc13b842818936536cc2335ac9d7d0e926da1a247f79d2e5bc2e3e0aa3a6de08" exitCode=0 Dec 02 20:37:38 crc kubenswrapper[4796]: I1202 20:37:38.017933 4796 generic.go:334] "Generic (PLEG): container finished" podID="afcd4408-b152-47e6-9c31-477cc4dcb04e" containerID="a0d660d6923d2d9de1bcb4bceb31465e953b2e5bc292d4035e3af86e7f508e0e" exitCode=143 Dec 02 20:37:38 crc kubenswrapper[4796]: I1202 20:37:38.017991 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"afcd4408-b152-47e6-9c31-477cc4dcb04e","Type":"ContainerDied","Data":"bc13b842818936536cc2335ac9d7d0e926da1a247f79d2e5bc2e3e0aa3a6de08"} Dec 02 
20:37:38 crc kubenswrapper[4796]: I1202 20:37:38.018020 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"afcd4408-b152-47e6-9c31-477cc4dcb04e","Type":"ContainerDied","Data":"a0d660d6923d2d9de1bcb4bceb31465e953b2e5bc292d4035e3af86e7f508e0e"} Dec 02 20:37:38 crc kubenswrapper[4796]: I1202 20:37:38.029338 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherbe33-account-delete-p6ms2" event={"ID":"4028ab5e-8906-45fd-abd6-0954db71014e","Type":"ContainerStarted","Data":"da074b9059731073a72850b867cd559313c3a66b7ff001cef09652c1ca6b81eb"} Dec 02 20:37:38 crc kubenswrapper[4796]: I1202 20:37:38.112633 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fll8l" Dec 02 20:37:38 crc kubenswrapper[4796]: I1202 20:37:38.355378 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:37:38 crc kubenswrapper[4796]: I1202 20:37:38.403781 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afcd4408-b152-47e6-9c31-477cc4dcb04e-combined-ca-bundle\") pod \"afcd4408-b152-47e6-9c31-477cc4dcb04e\" (UID: \"afcd4408-b152-47e6-9c31-477cc4dcb04e\") " Dec 02 20:37:38 crc kubenswrapper[4796]: I1202 20:37:38.403882 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/afcd4408-b152-47e6-9c31-477cc4dcb04e-cert-memcached-mtls\") pod \"afcd4408-b152-47e6-9c31-477cc4dcb04e\" (UID: \"afcd4408-b152-47e6-9c31-477cc4dcb04e\") " Dec 02 20:37:38 crc kubenswrapper[4796]: I1202 20:37:38.404042 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/afcd4408-b152-47e6-9c31-477cc4dcb04e-custom-prometheus-ca\") pod \"afcd4408-b152-47e6-9c31-477cc4dcb04e\" (UID: \"afcd4408-b152-47e6-9c31-477cc4dcb04e\") " Dec 02 20:37:38 crc kubenswrapper[4796]: I1202 20:37:38.404142 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afcd4408-b152-47e6-9c31-477cc4dcb04e-logs\") pod \"afcd4408-b152-47e6-9c31-477cc4dcb04e\" (UID: \"afcd4408-b152-47e6-9c31-477cc4dcb04e\") " Dec 02 20:37:38 crc kubenswrapper[4796]: I1202 20:37:38.404211 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afcd4408-b152-47e6-9c31-477cc4dcb04e-config-data\") pod \"afcd4408-b152-47e6-9c31-477cc4dcb04e\" (UID: \"afcd4408-b152-47e6-9c31-477cc4dcb04e\") " Dec 02 20:37:38 crc kubenswrapper[4796]: I1202 20:37:38.404270 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tvh9\" (UniqueName: \"kubernetes.io/projected/afcd4408-b152-47e6-9c31-477cc4dcb04e-kube-api-access-9tvh9\") pod \"afcd4408-b152-47e6-9c31-477cc4dcb04e\" (UID: \"afcd4408-b152-47e6-9c31-477cc4dcb04e\") " Dec 02 20:37:38 crc kubenswrapper[4796]: I1202 20:37:38.408668 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afcd4408-b152-47e6-9c31-477cc4dcb04e-logs" (OuterVolumeSpecName: "logs") pod "afcd4408-b152-47e6-9c31-477cc4dcb04e" (UID: "afcd4408-b152-47e6-9c31-477cc4dcb04e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:37:38 crc kubenswrapper[4796]: I1202 20:37:38.418441 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afcd4408-b152-47e6-9c31-477cc4dcb04e-kube-api-access-9tvh9" (OuterVolumeSpecName: "kube-api-access-9tvh9") pod "afcd4408-b152-47e6-9c31-477cc4dcb04e" (UID: "afcd4408-b152-47e6-9c31-477cc4dcb04e"). InnerVolumeSpecName "kube-api-access-9tvh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:37:38 crc kubenswrapper[4796]: I1202 20:37:38.447003 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afcd4408-b152-47e6-9c31-477cc4dcb04e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "afcd4408-b152-47e6-9c31-477cc4dcb04e" (UID: "afcd4408-b152-47e6-9c31-477cc4dcb04e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:37:38 crc kubenswrapper[4796]: I1202 20:37:38.467763 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afcd4408-b152-47e6-9c31-477cc4dcb04e-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "afcd4408-b152-47e6-9c31-477cc4dcb04e" (UID: "afcd4408-b152-47e6-9c31-477cc4dcb04e"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:37:38 crc kubenswrapper[4796]: I1202 20:37:38.489809 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afcd4408-b152-47e6-9c31-477cc4dcb04e-config-data" (OuterVolumeSpecName: "config-data") pod "afcd4408-b152-47e6-9c31-477cc4dcb04e" (UID: "afcd4408-b152-47e6-9c31-477cc4dcb04e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:37:38 crc kubenswrapper[4796]: I1202 20:37:38.506498 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afcd4408-b152-47e6-9c31-477cc4dcb04e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:38 crc kubenswrapper[4796]: I1202 20:37:38.506541 4796 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/afcd4408-b152-47e6-9c31-477cc4dcb04e-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:38 crc kubenswrapper[4796]: I1202 20:37:38.506553 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afcd4408-b152-47e6-9c31-477cc4dcb04e-logs\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:38 crc kubenswrapper[4796]: I1202 20:37:38.506568 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afcd4408-b152-47e6-9c31-477cc4dcb04e-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:38 crc kubenswrapper[4796]: I1202 20:37:38.506577 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tvh9\" (UniqueName: \"kubernetes.io/projected/afcd4408-b152-47e6-9c31-477cc4dcb04e-kube-api-access-9tvh9\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:38 crc kubenswrapper[4796]: I1202 20:37:38.514335 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afcd4408-b152-47e6-9c31-477cc4dcb04e-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "afcd4408-b152-47e6-9c31-477cc4dcb04e" (UID: "afcd4408-b152-47e6-9c31-477cc4dcb04e"). 
InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:37:38 crc kubenswrapper[4796]: I1202 20:37:38.608592 4796 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/afcd4408-b152-47e6-9c31-477cc4dcb04e-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:39 crc kubenswrapper[4796]: I1202 20:37:39.041054 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"afcd4408-b152-47e6-9c31-477cc4dcb04e","Type":"ContainerDied","Data":"344d09f3580528029c0556f8f337a3ce1a7b43706454852949f3d9d4c4ab29e8"} Dec 02 20:37:39 crc kubenswrapper[4796]: I1202 20:37:39.041101 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:37:39 crc kubenswrapper[4796]: I1202 20:37:39.041111 4796 scope.go:117] "RemoveContainer" containerID="bc13b842818936536cc2335ac9d7d0e926da1a247f79d2e5bc2e3e0aa3a6de08" Dec 02 20:37:39 crc kubenswrapper[4796]: I1202 20:37:39.044937 4796 generic.go:334] "Generic (PLEG): container finished" podID="4028ab5e-8906-45fd-abd6-0954db71014e" containerID="b7997d961aafdc2d72e2e27bb86c9af7c7035cf6a7dbc35fd9582318f5a98b73" exitCode=0 Dec 02 20:37:39 crc kubenswrapper[4796]: I1202 20:37:39.045028 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherbe33-account-delete-p6ms2" event={"ID":"4028ab5e-8906-45fd-abd6-0954db71014e","Type":"ContainerDied","Data":"b7997d961aafdc2d72e2e27bb86c9af7c7035cf6a7dbc35fd9582318f5a98b73"} Dec 02 20:37:39 crc kubenswrapper[4796]: I1202 20:37:39.064936 4796 generic.go:334] "Generic (PLEG): container finished" podID="1b17e49e-2ba6-420e-913c-841bcf939fae" containerID="25f9d883bfe27f5baa25f86181b23b7a35da24f5f9c82fe960faf1a1eefb32ea" exitCode=0 Dec 02 20:37:39 crc kubenswrapper[4796]: I1202 20:37:39.065539 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"1b17e49e-2ba6-420e-913c-841bcf939fae","Type":"ContainerDied","Data":"25f9d883bfe27f5baa25f86181b23b7a35da24f5f9c82fe960faf1a1eefb32ea"} Dec 02 20:37:39 crc kubenswrapper[4796]: I1202 20:37:39.093130 4796 scope.go:117] "RemoveContainer" containerID="a0d660d6923d2d9de1bcb4bceb31465e953b2e5bc292d4035e3af86e7f508e0e" Dec 02 20:37:39 crc kubenswrapper[4796]: I1202 20:37:39.106722 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:37:39 crc kubenswrapper[4796]: I1202 20:37:39.123171 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:37:39 crc kubenswrapper[4796]: I1202 20:37:39.236855 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:37:39 crc kubenswrapper[4796]: I1202 20:37:39.286701 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afcd4408-b152-47e6-9c31-477cc4dcb04e" path="/var/lib/kubelet/pods/afcd4408-b152-47e6-9c31-477cc4dcb04e/volumes" Dec 02 20:37:39 crc kubenswrapper[4796]: I1202 20:37:39.321247 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b17e49e-2ba6-420e-913c-841bcf939fae-config-data\") pod \"1b17e49e-2ba6-420e-913c-841bcf939fae\" (UID: \"1b17e49e-2ba6-420e-913c-841bcf939fae\") " Dec 02 20:37:39 crc kubenswrapper[4796]: I1202 20:37:39.321400 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b17e49e-2ba6-420e-913c-841bcf939fae-combined-ca-bundle\") pod \"1b17e49e-2ba6-420e-913c-841bcf939fae\" (UID: \"1b17e49e-2ba6-420e-913c-841bcf939fae\") " Dec 02 20:37:39 crc kubenswrapper[4796]: I1202 20:37:39.321449 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1b17e49e-2ba6-420e-913c-841bcf939fae-cert-memcached-mtls\") pod \"1b17e49e-2ba6-420e-913c-841bcf939fae\" (UID: \"1b17e49e-2ba6-420e-913c-841bcf939fae\") " Dec 02 20:37:39 crc kubenswrapper[4796]: I1202 20:37:39.321747 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b17e49e-2ba6-420e-913c-841bcf939fae-logs\") pod \"1b17e49e-2ba6-420e-913c-841bcf939fae\" (UID: \"1b17e49e-2ba6-420e-913c-841bcf939fae\") " Dec 02 20:37:39 crc kubenswrapper[4796]: I1202 20:37:39.321797 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6n9j\" (UniqueName: \"kubernetes.io/projected/1b17e49e-2ba6-420e-913c-841bcf939fae-kube-api-access-q6n9j\") pod \"1b17e49e-2ba6-420e-913c-841bcf939fae\" (UID: \"1b17e49e-2ba6-420e-913c-841bcf939fae\") " Dec 02 20:37:39 crc kubenswrapper[4796]: I1202 20:37:39.322093 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b17e49e-2ba6-420e-913c-841bcf939fae-logs" (OuterVolumeSpecName: "logs") pod "1b17e49e-2ba6-420e-913c-841bcf939fae" (UID: "1b17e49e-2ba6-420e-913c-841bcf939fae"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:37:39 crc kubenswrapper[4796]: I1202 20:37:39.322307 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b17e49e-2ba6-420e-913c-841bcf939fae-logs\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:39 crc kubenswrapper[4796]: I1202 20:37:39.325854 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b17e49e-2ba6-420e-913c-841bcf939fae-kube-api-access-q6n9j" (OuterVolumeSpecName: "kube-api-access-q6n9j") pod "1b17e49e-2ba6-420e-913c-841bcf939fae" (UID: "1b17e49e-2ba6-420e-913c-841bcf939fae"). InnerVolumeSpecName "kube-api-access-q6n9j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:37:39 crc kubenswrapper[4796]: I1202 20:37:39.380646 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b17e49e-2ba6-420e-913c-841bcf939fae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b17e49e-2ba6-420e-913c-841bcf939fae" (UID: "1b17e49e-2ba6-420e-913c-841bcf939fae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:37:39 crc kubenswrapper[4796]: I1202 20:37:39.392067 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b17e49e-2ba6-420e-913c-841bcf939fae-config-data" (OuterVolumeSpecName: "config-data") pod "1b17e49e-2ba6-420e-913c-841bcf939fae" (UID: "1b17e49e-2ba6-420e-913c-841bcf939fae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:37:39 crc kubenswrapper[4796]: I1202 20:37:39.419037 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b17e49e-2ba6-420e-913c-841bcf939fae-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "1b17e49e-2ba6-420e-913c-841bcf939fae" (UID: "1b17e49e-2ba6-420e-913c-841bcf939fae"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:37:39 crc kubenswrapper[4796]: I1202 20:37:39.423831 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b17e49e-2ba6-420e-913c-841bcf939fae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:39 crc kubenswrapper[4796]: I1202 20:37:39.423901 4796 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1b17e49e-2ba6-420e-913c-841bcf939fae-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:39 crc kubenswrapper[4796]: I1202 20:37:39.423914 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6n9j\" (UniqueName: \"kubernetes.io/projected/1b17e49e-2ba6-420e-913c-841bcf939fae-kube-api-access-q6n9j\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:39 crc kubenswrapper[4796]: I1202 20:37:39.423927 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b17e49e-2ba6-420e-913c-841bcf939fae-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:39 crc kubenswrapper[4796]: I1202 20:37:39.722131 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:37:39 crc kubenswrapper[4796]: I1202 20:37:39.722559 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad" containerName="sg-core" containerID="cri-o://7a6b02a6b40bb22cb17f73cc4d96c94e4ca24c3d81704a2dbe94f043a792e96e" gracePeriod=30 Dec 02 20:37:39 crc kubenswrapper[4796]: I1202 20:37:39.722601 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad" containerName="ceilometer-notification-agent" containerID="cri-o://f21e1969dd9d01e8041370fa63228288d68d54b1723d79730e391278236bf133" gracePeriod=30 Dec 02 20:37:39 crc kubenswrapper[4796]: I1202 20:37:39.722612 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" 
podUID="aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad" containerName="proxy-httpd" containerID="cri-o://9d4dff48935d6fc7db8db69ee5b847f99c3d41a88edc3341aa2ba3b027c3056e" gracePeriod=30 Dec 02 20:37:39 crc kubenswrapper[4796]: I1202 20:37:39.722783 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad" containerName="ceilometer-central-agent" containerID="cri-o://9fd2c3f1a9c49f3d793fec2949952e5b5589f5655307693388366c14f839460f" gracePeriod=30 Dec 02 20:37:40 crc kubenswrapper[4796]: I1202 20:37:40.073531 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"1b17e49e-2ba6-420e-913c-841bcf939fae","Type":"ContainerDied","Data":"615333164dc197ca4bca4513324d214fecc77c606e2631e86be4adfd17ecb686"} Dec 02 20:37:40 crc kubenswrapper[4796]: I1202 20:37:40.073847 4796 scope.go:117] "RemoveContainer" containerID="25f9d883bfe27f5baa25f86181b23b7a35da24f5f9c82fe960faf1a1eefb32ea" Dec 02 20:37:40 crc kubenswrapper[4796]: I1202 20:37:40.073971 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:37:40 crc kubenswrapper[4796]: I1202 20:37:40.081274 4796 generic.go:334] "Generic (PLEG): container finished" podID="aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad" containerID="9d4dff48935d6fc7db8db69ee5b847f99c3d41a88edc3341aa2ba3b027c3056e" exitCode=0 Dec 02 20:37:40 crc kubenswrapper[4796]: I1202 20:37:40.081304 4796 generic.go:334] "Generic (PLEG): container finished" podID="aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad" containerID="7a6b02a6b40bb22cb17f73cc4d96c94e4ca24c3d81704a2dbe94f043a792e96e" exitCode=2 Dec 02 20:37:40 crc kubenswrapper[4796]: I1202 20:37:40.081369 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad","Type":"ContainerDied","Data":"9d4dff48935d6fc7db8db69ee5b847f99c3d41a88edc3341aa2ba3b027c3056e"} Dec 02 20:37:40 crc kubenswrapper[4796]: I1202 20:37:40.081423 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad","Type":"ContainerDied","Data":"7a6b02a6b40bb22cb17f73cc4d96c94e4ca24c3d81704a2dbe94f043a792e96e"} Dec 02 20:37:40 crc kubenswrapper[4796]: I1202 20:37:40.110757 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:37:40 crc kubenswrapper[4796]: I1202 20:37:40.120981 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:37:40 crc kubenswrapper[4796]: I1202 20:37:40.467915 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcherbe33-account-delete-p6ms2" Dec 02 20:37:40 crc kubenswrapper[4796]: I1202 20:37:40.546172 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4028ab5e-8906-45fd-abd6-0954db71014e-operator-scripts\") pod \"4028ab5e-8906-45fd-abd6-0954db71014e\" (UID: \"4028ab5e-8906-45fd-abd6-0954db71014e\") " Dec 02 20:37:40 crc kubenswrapper[4796]: I1202 20:37:40.546245 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8kd5\" (UniqueName: \"kubernetes.io/projected/4028ab5e-8906-45fd-abd6-0954db71014e-kube-api-access-x8kd5\") pod \"4028ab5e-8906-45fd-abd6-0954db71014e\" (UID: \"4028ab5e-8906-45fd-abd6-0954db71014e\") " Dec 02 20:37:40 crc kubenswrapper[4796]: I1202 20:37:40.547880 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4028ab5e-8906-45fd-abd6-0954db71014e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4028ab5e-8906-45fd-abd6-0954db71014e" (UID: "4028ab5e-8906-45fd-abd6-0954db71014e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:37:40 crc kubenswrapper[4796]: I1202 20:37:40.553808 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4028ab5e-8906-45fd-abd6-0954db71014e-kube-api-access-x8kd5" (OuterVolumeSpecName: "kube-api-access-x8kd5") pod "4028ab5e-8906-45fd-abd6-0954db71014e" (UID: "4028ab5e-8906-45fd-abd6-0954db71014e"). InnerVolumeSpecName "kube-api-access-x8kd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:37:40 crc kubenswrapper[4796]: I1202 20:37:40.609619 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:40 crc kubenswrapper[4796]: I1202 20:37:40.647515 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-log-httpd\") pod \"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad\" (UID: \"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad\") " Dec 02 20:37:40 crc kubenswrapper[4796]: I1202 20:37:40.647610 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-combined-ca-bundle\") pod \"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad\" (UID: \"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad\") " Dec 02 20:37:40 crc kubenswrapper[4796]: I1202 20:37:40.647705 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdqzc\" (UniqueName: \"kubernetes.io/projected/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-kube-api-access-jdqzc\") pod \"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad\" (UID: \"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad\") " Dec 02 20:37:40 crc kubenswrapper[4796]: I1202 20:37:40.647770 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-config-data\") pod \"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad\" (UID: \"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad\") " Dec 02 20:37:40 crc kubenswrapper[4796]: I1202 20:37:40.647840 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-run-httpd\") pod \"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad\" (UID: \"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad\") " Dec 02 20:37:40 crc kubenswrapper[4796]: I1202 20:37:40.647879 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-ceilometer-tls-certs\") pod \"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad\" (UID: \"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad\") " Dec 02 20:37:40 crc kubenswrapper[4796]: I1202 20:37:40.647915 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-scripts\") pod \"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad\" (UID: \"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad\") " Dec 02 20:37:40 crc kubenswrapper[4796]: I1202 20:37:40.647941 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-sg-core-conf-yaml\") pod \"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad\" (UID: \"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad\") " Dec 02 20:37:40 crc kubenswrapper[4796]: I1202 20:37:40.648019 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad" (UID: "aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:37:40 crc kubenswrapper[4796]: I1202 20:37:40.648414 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4028ab5e-8906-45fd-abd6-0954db71014e-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:40 crc kubenswrapper[4796]: I1202 20:37:40.648442 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8kd5\" (UniqueName: \"kubernetes.io/projected/4028ab5e-8906-45fd-abd6-0954db71014e-kube-api-access-x8kd5\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:40 crc kubenswrapper[4796]: I1202 20:37:40.648458 4796 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:40 crc kubenswrapper[4796]: I1202 20:37:40.652120 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad" (UID: "aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:37:40 crc kubenswrapper[4796]: I1202 20:37:40.652952 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-kube-api-access-jdqzc" (OuterVolumeSpecName: "kube-api-access-jdqzc") pod "aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad" (UID: "aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad"). InnerVolumeSpecName "kube-api-access-jdqzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:37:40 crc kubenswrapper[4796]: I1202 20:37:40.654838 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-scripts" (OuterVolumeSpecName: "scripts") pod "aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad" (UID: "aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:37:40 crc kubenswrapper[4796]: I1202 20:37:40.710347 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad" (UID: "aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:37:40 crc kubenswrapper[4796]: I1202 20:37:40.710738 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad" (UID: "aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:37:40 crc kubenswrapper[4796]: I1202 20:37:40.752450 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdqzc\" (UniqueName: \"kubernetes.io/projected/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-kube-api-access-jdqzc\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:40 crc kubenswrapper[4796]: I1202 20:37:40.752694 4796 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:40 crc kubenswrapper[4796]: I1202 20:37:40.752756 4796 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:40 crc kubenswrapper[4796]: I1202 20:37:40.752821 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:40 crc kubenswrapper[4796]: I1202 20:37:40.752925 4796 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:40 crc kubenswrapper[4796]: I1202 20:37:40.757068 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad" (UID: "aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:37:40 crc kubenswrapper[4796]: I1202 20:37:40.776198 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fll8l"] Dec 02 20:37:40 crc kubenswrapper[4796]: I1202 20:37:40.799015 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-config-data" (OuterVolumeSpecName: "config-data") pod "aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad" (UID: "aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:37:40 crc kubenswrapper[4796]: I1202 20:37:40.854823 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:40 crc kubenswrapper[4796]: I1202 20:37:40.854875 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.133281 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherbe33-account-delete-p6ms2" event={"ID":"4028ab5e-8906-45fd-abd6-0954db71014e","Type":"ContainerDied","Data":"da074b9059731073a72850b867cd559313c3a66b7ff001cef09652c1ca6b81eb"} Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.133343 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da074b9059731073a72850b867cd559313c3a66b7ff001cef09652c1ca6b81eb" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.133453 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcherbe33-account-delete-p6ms2" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.139508 4796 generic.go:334] "Generic (PLEG): container finished" podID="8d488a05-dbc8-49d8-921b-7fe2b03a8eef" containerID="47e8efee9dd04fbd6799794418af3768051d531e7be8185a0af5546b8dbdd82a" exitCode=0 Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.139675 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"8d488a05-dbc8-49d8-921b-7fe2b03a8eef","Type":"ContainerDied","Data":"47e8efee9dd04fbd6799794418af3768051d531e7be8185a0af5546b8dbdd82a"} Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.150729 4796 generic.go:334] "Generic (PLEG): container finished" podID="aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad" containerID="f21e1969dd9d01e8041370fa63228288d68d54b1723d79730e391278236bf133" exitCode=0 Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.150770 4796 generic.go:334] "Generic (PLEG): container finished" podID="aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad" containerID="9fd2c3f1a9c49f3d793fec2949952e5b5589f5655307693388366c14f839460f" exitCode=0 Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.150988 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad","Type":"ContainerDied","Data":"f21e1969dd9d01e8041370fa63228288d68d54b1723d79730e391278236bf133"} Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.151045 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad","Type":"ContainerDied","Data":"9fd2c3f1a9c49f3d793fec2949952e5b5589f5655307693388366c14f839460f"} Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.151059 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad","Type":"ContainerDied","Data":"ae5a732f1f2bf7e989eda850fd629410f7de861f9cc637c0410a85e1368bea4b"} Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.151086 4796 scope.go:117] "RemoveContainer" containerID="9d4dff48935d6fc7db8db69ee5b847f99c3d41a88edc3341aa2ba3b027c3056e" Dec 02 20:37:41 crc 
kubenswrapper[4796]: I1202 20:37:41.151562 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fll8l" podUID="a80e7e4b-45d2-4693-961e-5c9bc35290f8" containerName="registry-server" containerID="cri-o://9a581e00b0e688006db6187f3cbc5e911519e5fd8c9302553d46e6a194667bd4" gracePeriod=2 Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.151755 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.255849 4796 scope.go:117] "RemoveContainer" containerID="7a6b02a6b40bb22cb17f73cc4d96c94e4ca24c3d81704a2dbe94f043a792e96e" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.323364 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b17e49e-2ba6-420e-913c-841bcf939fae" path="/var/lib/kubelet/pods/1b17e49e-2ba6-420e-913c-841bcf939fae/volumes" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.323915 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.323962 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.324687 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:37:41 crc kubenswrapper[4796]: E1202 20:37:41.324927 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b17e49e-2ba6-420e-913c-841bcf939fae" containerName="watcher-applier" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.324940 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b17e49e-2ba6-420e-913c-841bcf939fae" containerName="watcher-applier" Dec 02 20:37:41 crc kubenswrapper[4796]: E1202 20:37:41.324954 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad" containerName="ceilometer-central-agent" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.324961 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad" containerName="ceilometer-central-agent" Dec 02 20:37:41 crc kubenswrapper[4796]: E1202 20:37:41.324981 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad" containerName="sg-core" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.324987 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad" containerName="sg-core" Dec 02 20:37:41 crc kubenswrapper[4796]: E1202 20:37:41.325001 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad" containerName="proxy-httpd" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.325008 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad" containerName="proxy-httpd" Dec 02 20:37:41 crc kubenswrapper[4796]: E1202 20:37:41.325019 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afcd4408-b152-47e6-9c31-477cc4dcb04e" containerName="watcher-kuttl-api-log" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.325025 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="afcd4408-b152-47e6-9c31-477cc4dcb04e" containerName="watcher-kuttl-api-log" Dec 02 20:37:41 crc kubenswrapper[4796]: E1202 20:37:41.325036 4796 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad" containerName="ceilometer-notification-agent" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.325042 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad" containerName="ceilometer-notification-agent" Dec 02 20:37:41 crc kubenswrapper[4796]: E1202 20:37:41.325064 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4028ab5e-8906-45fd-abd6-0954db71014e" containerName="mariadb-account-delete" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.325071 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="4028ab5e-8906-45fd-abd6-0954db71014e" containerName="mariadb-account-delete" Dec 02 20:37:41 crc kubenswrapper[4796]: E1202 20:37:41.325084 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afcd4408-b152-47e6-9c31-477cc4dcb04e" containerName="watcher-api" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.325091 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="afcd4408-b152-47e6-9c31-477cc4dcb04e" containerName="watcher-api" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.325239 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad" containerName="ceilometer-central-agent" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.325265 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="afcd4408-b152-47e6-9c31-477cc4dcb04e" containerName="watcher-api" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.325272 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="4028ab5e-8906-45fd-abd6-0954db71014e" containerName="mariadb-account-delete" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.325282 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad" containerName="proxy-httpd" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.325294 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b17e49e-2ba6-420e-913c-841bcf939fae" containerName="watcher-applier" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.325305 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad" containerName="ceilometer-notification-agent" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.325316 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad" containerName="sg-core" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.325328 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="afcd4408-b152-47e6-9c31-477cc4dcb04e" containerName="watcher-kuttl-api-log" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.331477 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.331953 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.339564 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.340575 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.340746 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.371045 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/55690485-e94e-48e1-9a93-30d0167cb17d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"55690485-e94e-48e1-9a93-30d0167cb17d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.371119 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/55690485-e94e-48e1-9a93-30d0167cb17d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"55690485-e94e-48e1-9a93-30d0167cb17d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.371137 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55690485-e94e-48e1-9a93-30d0167cb17d-log-httpd\") pod \"ceilometer-0\" (UID: \"55690485-e94e-48e1-9a93-30d0167cb17d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.371153 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55690485-e94e-48e1-9a93-30d0167cb17d-scripts\") pod \"ceilometer-0\" (UID: \"55690485-e94e-48e1-9a93-30d0167cb17d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.371176 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55690485-e94e-48e1-9a93-30d0167cb17d-config-data\") pod \"ceilometer-0\" (UID: \"55690485-e94e-48e1-9a93-30d0167cb17d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.371209 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55690485-e94e-48e1-9a93-30d0167cb17d-run-httpd\") pod \"ceilometer-0\" (UID: \"55690485-e94e-48e1-9a93-30d0167cb17d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.371228 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drl9l\" (UniqueName: \"kubernetes.io/projected/55690485-e94e-48e1-9a93-30d0167cb17d-kube-api-access-drl9l\") pod \"ceilometer-0\" (UID: \"55690485-e94e-48e1-9a93-30d0167cb17d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.371321 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55690485-e94e-48e1-9a93-30d0167cb17d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"55690485-e94e-48e1-9a93-30d0167cb17d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.442491 4796 scope.go:117] "RemoveContainer" containerID="f21e1969dd9d01e8041370fa63228288d68d54b1723d79730e391278236bf133" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.473999 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55690485-e94e-48e1-9a93-30d0167cb17d-run-httpd\") pod \"ceilometer-0\" (UID: \"55690485-e94e-48e1-9a93-30d0167cb17d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.474571 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drl9l\" (UniqueName: \"kubernetes.io/projected/55690485-e94e-48e1-9a93-30d0167cb17d-kube-api-access-drl9l\") pod \"ceilometer-0\" (UID: \"55690485-e94e-48e1-9a93-30d0167cb17d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.474678 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55690485-e94e-48e1-9a93-30d0167cb17d-run-httpd\") pod \"ceilometer-0\" (UID: \"55690485-e94e-48e1-9a93-30d0167cb17d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.474734 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55690485-e94e-48e1-9a93-30d0167cb17d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"55690485-e94e-48e1-9a93-30d0167cb17d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.474985 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/55690485-e94e-48e1-9a93-30d0167cb17d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"55690485-e94e-48e1-9a93-30d0167cb17d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.475132 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/55690485-e94e-48e1-9a93-30d0167cb17d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"55690485-e94e-48e1-9a93-30d0167cb17d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.475275 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55690485-e94e-48e1-9a93-30d0167cb17d-log-httpd\") pod \"ceilometer-0\" (UID: \"55690485-e94e-48e1-9a93-30d0167cb17d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.475304 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55690485-e94e-48e1-9a93-30d0167cb17d-scripts\") pod \"ceilometer-0\" (UID: \"55690485-e94e-48e1-9a93-30d0167cb17d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.475472 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/55690485-e94e-48e1-9a93-30d0167cb17d-config-data\") pod \"ceilometer-0\" (UID: \"55690485-e94e-48e1-9a93-30d0167cb17d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.477436 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55690485-e94e-48e1-9a93-30d0167cb17d-log-httpd\") pod \"ceilometer-0\" (UID: \"55690485-e94e-48e1-9a93-30d0167cb17d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.485976 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55690485-e94e-48e1-9a93-30d0167cb17d-config-data\") pod \"ceilometer-0\" (UID: \"55690485-e94e-48e1-9a93-30d0167cb17d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.486693 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55690485-e94e-48e1-9a93-30d0167cb17d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"55690485-e94e-48e1-9a93-30d0167cb17d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.486868 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/55690485-e94e-48e1-9a93-30d0167cb17d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"55690485-e94e-48e1-9a93-30d0167cb17d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.490086 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/55690485-e94e-48e1-9a93-30d0167cb17d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"55690485-e94e-48e1-9a93-30d0167cb17d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.491501 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55690485-e94e-48e1-9a93-30d0167cb17d-scripts\") pod \"ceilometer-0\" (UID: \"55690485-e94e-48e1-9a93-30d0167cb17d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.508052 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drl9l\" (UniqueName: \"kubernetes.io/projected/55690485-e94e-48e1-9a93-30d0167cb17d-kube-api-access-drl9l\") pod \"ceilometer-0\" (UID: \"55690485-e94e-48e1-9a93-30d0167cb17d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.584695 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.585933 4796 scope.go:117] "RemoveContainer" containerID="9fd2c3f1a9c49f3d793fec2949952e5b5589f5655307693388366c14f839460f" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.624023 4796 scope.go:117] "RemoveContainer" containerID="9d4dff48935d6fc7db8db69ee5b847f99c3d41a88edc3341aa2ba3b027c3056e" Dec 02 20:37:41 crc kubenswrapper[4796]: E1202 20:37:41.624471 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d4dff48935d6fc7db8db69ee5b847f99c3d41a88edc3341aa2ba3b027c3056e\": container with ID starting with 9d4dff48935d6fc7db8db69ee5b847f99c3d41a88edc3341aa2ba3b027c3056e not found: ID does not exist" containerID="9d4dff48935d6fc7db8db69ee5b847f99c3d41a88edc3341aa2ba3b027c3056e" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.624502 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d4dff48935d6fc7db8db69ee5b847f99c3d41a88edc3341aa2ba3b027c3056e"} err="failed to get container status \"9d4dff48935d6fc7db8db69ee5b847f99c3d41a88edc3341aa2ba3b027c3056e\": rpc error: code = NotFound desc = could not find container \"9d4dff48935d6fc7db8db69ee5b847f99c3d41a88edc3341aa2ba3b027c3056e\": container with ID starting with 9d4dff48935d6fc7db8db69ee5b847f99c3d41a88edc3341aa2ba3b027c3056e not found: ID does not exist" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.624526 4796 scope.go:117] "RemoveContainer" containerID="7a6b02a6b40bb22cb17f73cc4d96c94e4ca24c3d81704a2dbe94f043a792e96e" Dec 02 20:37:41 crc kubenswrapper[4796]: E1202 20:37:41.624722 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a6b02a6b40bb22cb17f73cc4d96c94e4ca24c3d81704a2dbe94f043a792e96e\": container with ID starting with 7a6b02a6b40bb22cb17f73cc4d96c94e4ca24c3d81704a2dbe94f043a792e96e not found: ID does not exist" containerID="7a6b02a6b40bb22cb17f73cc4d96c94e4ca24c3d81704a2dbe94f043a792e96e" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.624738 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a6b02a6b40bb22cb17f73cc4d96c94e4ca24c3d81704a2dbe94f043a792e96e"} err="failed to get container status \"7a6b02a6b40bb22cb17f73cc4d96c94e4ca24c3d81704a2dbe94f043a792e96e\": rpc error: code = NotFound desc = could not find container \"7a6b02a6b40bb22cb17f73cc4d96c94e4ca24c3d81704a2dbe94f043a792e96e\": container with ID starting with 7a6b02a6b40bb22cb17f73cc4d96c94e4ca24c3d81704a2dbe94f043a792e96e not found: ID does not exist" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.624751 4796 scope.go:117] "RemoveContainer" containerID="f21e1969dd9d01e8041370fa63228288d68d54b1723d79730e391278236bf133" Dec 02 20:37:41 crc kubenswrapper[4796]: E1202 20:37:41.624914 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f21e1969dd9d01e8041370fa63228288d68d54b1723d79730e391278236bf133\": container with ID starting with f21e1969dd9d01e8041370fa63228288d68d54b1723d79730e391278236bf133 not found: ID does not exist" containerID="f21e1969dd9d01e8041370fa63228288d68d54b1723d79730e391278236bf133" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.624930 4796 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f21e1969dd9d01e8041370fa63228288d68d54b1723d79730e391278236bf133"} err="failed to get container status \"f21e1969dd9d01e8041370fa63228288d68d54b1723d79730e391278236bf133\": rpc error: code = NotFound desc = could not find container \"f21e1969dd9d01e8041370fa63228288d68d54b1723d79730e391278236bf133\": container with ID starting with f21e1969dd9d01e8041370fa63228288d68d54b1723d79730e391278236bf133 not found: ID does not exist" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.624942 4796 scope.go:117] "RemoveContainer" containerID="9fd2c3f1a9c49f3d793fec2949952e5b5589f5655307693388366c14f839460f" Dec 02 20:37:41 crc kubenswrapper[4796]: E1202 20:37:41.625083 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fd2c3f1a9c49f3d793fec2949952e5b5589f5655307693388366c14f839460f\": container with ID starting with 9fd2c3f1a9c49f3d793fec2949952e5b5589f5655307693388366c14f839460f not found: ID does not exist" containerID="9fd2c3f1a9c49f3d793fec2949952e5b5589f5655307693388366c14f839460f" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.625101 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fd2c3f1a9c49f3d793fec2949952e5b5589f5655307693388366c14f839460f"} err="failed to get container status \"9fd2c3f1a9c49f3d793fec2949952e5b5589f5655307693388366c14f839460f\": rpc error: code = NotFound desc = could not find container \"9fd2c3f1a9c49f3d793fec2949952e5b5589f5655307693388366c14f839460f\": container with ID starting with 9fd2c3f1a9c49f3d793fec2949952e5b5589f5655307693388366c14f839460f not found: ID does not exist" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.625113 4796 scope.go:117] "RemoveContainer" containerID="9d4dff48935d6fc7db8db69ee5b847f99c3d41a88edc3341aa2ba3b027c3056e" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.625285 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d4dff48935d6fc7db8db69ee5b847f99c3d41a88edc3341aa2ba3b027c3056e"} err="failed to get container status \"9d4dff48935d6fc7db8db69ee5b847f99c3d41a88edc3341aa2ba3b027c3056e\": rpc error: code = NotFound desc = could not find container \"9d4dff48935d6fc7db8db69ee5b847f99c3d41a88edc3341aa2ba3b027c3056e\": container with ID starting with 9d4dff48935d6fc7db8db69ee5b847f99c3d41a88edc3341aa2ba3b027c3056e not found: ID does not exist" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.625298 4796 scope.go:117] "RemoveContainer" containerID="7a6b02a6b40bb22cb17f73cc4d96c94e4ca24c3d81704a2dbe94f043a792e96e" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.625445 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a6b02a6b40bb22cb17f73cc4d96c94e4ca24c3d81704a2dbe94f043a792e96e"} err="failed to get container status \"7a6b02a6b40bb22cb17f73cc4d96c94e4ca24c3d81704a2dbe94f043a792e96e\": rpc error: code = NotFound desc = could not find container \"7a6b02a6b40bb22cb17f73cc4d96c94e4ca24c3d81704a2dbe94f043a792e96e\": container with ID starting with 7a6b02a6b40bb22cb17f73cc4d96c94e4ca24c3d81704a2dbe94f043a792e96e not found: ID does not exist" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.625460 4796 scope.go:117] "RemoveContainer" containerID="f21e1969dd9d01e8041370fa63228288d68d54b1723d79730e391278236bf133" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.625618 4796 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f21e1969dd9d01e8041370fa63228288d68d54b1723d79730e391278236bf133"} err="failed to get container status \"f21e1969dd9d01e8041370fa63228288d68d54b1723d79730e391278236bf133\": rpc error: code = NotFound desc = could not find container \"f21e1969dd9d01e8041370fa63228288d68d54b1723d79730e391278236bf133\": container with ID starting with f21e1969dd9d01e8041370fa63228288d68d54b1723d79730e391278236bf133 not found: ID does not exist" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.625632 4796 scope.go:117] "RemoveContainer" containerID="9fd2c3f1a9c49f3d793fec2949952e5b5589f5655307693388366c14f839460f" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.625792 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fd2c3f1a9c49f3d793fec2949952e5b5589f5655307693388366c14f839460f"} err="failed to get container status \"9fd2c3f1a9c49f3d793fec2949952e5b5589f5655307693388366c14f839460f\": rpc error: code = NotFound desc = could not find container \"9fd2c3f1a9c49f3d793fec2949952e5b5589f5655307693388366c14f839460f\": container with ID starting with 9fd2c3f1a9c49f3d793fec2949952e5b5589f5655307693388366c14f839460f not found: ID does not exist" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.678510 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d488a05-dbc8-49d8-921b-7fe2b03a8eef-combined-ca-bundle\") pod \"8d488a05-dbc8-49d8-921b-7fe2b03a8eef\" (UID: \"8d488a05-dbc8-49d8-921b-7fe2b03a8eef\") " Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.678688 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcxn6\" (UniqueName: \"kubernetes.io/projected/8d488a05-dbc8-49d8-921b-7fe2b03a8eef-kube-api-access-kcxn6\") pod \"8d488a05-dbc8-49d8-921b-7fe2b03a8eef\" (UID: \"8d488a05-dbc8-49d8-921b-7fe2b03a8eef\") " Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.678788 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d488a05-dbc8-49d8-921b-7fe2b03a8eef-logs\") pod \"8d488a05-dbc8-49d8-921b-7fe2b03a8eef\" (UID: \"8d488a05-dbc8-49d8-921b-7fe2b03a8eef\") " Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.678823 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8d488a05-dbc8-49d8-921b-7fe2b03a8eef-custom-prometheus-ca\") pod \"8d488a05-dbc8-49d8-921b-7fe2b03a8eef\" (UID: \"8d488a05-dbc8-49d8-921b-7fe2b03a8eef\") " Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.678842 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8d488a05-dbc8-49d8-921b-7fe2b03a8eef-cert-memcached-mtls\") pod \"8d488a05-dbc8-49d8-921b-7fe2b03a8eef\" (UID: \"8d488a05-dbc8-49d8-921b-7fe2b03a8eef\") " Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.678865 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d488a05-dbc8-49d8-921b-7fe2b03a8eef-config-data\") pod \"8d488a05-dbc8-49d8-921b-7fe2b03a8eef\" (UID: \"8d488a05-dbc8-49d8-921b-7fe2b03a8eef\") " Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.679445 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/8d488a05-dbc8-49d8-921b-7fe2b03a8eef-logs" (OuterVolumeSpecName: "logs") pod "8d488a05-dbc8-49d8-921b-7fe2b03a8eef" (UID: "8d488a05-dbc8-49d8-921b-7fe2b03a8eef"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.683377 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.683484 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d488a05-dbc8-49d8-921b-7fe2b03a8eef-kube-api-access-kcxn6" (OuterVolumeSpecName: "kube-api-access-kcxn6") pod "8d488a05-dbc8-49d8-921b-7fe2b03a8eef" (UID: "8d488a05-dbc8-49d8-921b-7fe2b03a8eef"). InnerVolumeSpecName "kube-api-access-kcxn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.722966 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d488a05-dbc8-49d8-921b-7fe2b03a8eef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d488a05-dbc8-49d8-921b-7fe2b03a8eef" (UID: "8d488a05-dbc8-49d8-921b-7fe2b03a8eef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.728193 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d488a05-dbc8-49d8-921b-7fe2b03a8eef-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "8d488a05-dbc8-49d8-921b-7fe2b03a8eef" (UID: "8d488a05-dbc8-49d8-921b-7fe2b03a8eef"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.781560 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d488a05-dbc8-49d8-921b-7fe2b03a8eef-config-data" (OuterVolumeSpecName: "config-data") pod "8d488a05-dbc8-49d8-921b-7fe2b03a8eef" (UID: "8d488a05-dbc8-49d8-921b-7fe2b03a8eef"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.783603 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d488a05-dbc8-49d8-921b-7fe2b03a8eef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.784082 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcxn6\" (UniqueName: \"kubernetes.io/projected/8d488a05-dbc8-49d8-921b-7fe2b03a8eef-kube-api-access-kcxn6\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.784096 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d488a05-dbc8-49d8-921b-7fe2b03a8eef-logs\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.784106 4796 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8d488a05-dbc8-49d8-921b-7fe2b03a8eef-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.784115 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d488a05-dbc8-49d8-921b-7fe2b03a8eef-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.789385 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-4kmtk"] Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.799908 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-4kmtk"] Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.813911 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcherbe33-account-delete-p6ms2"] Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.820565 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-be33-account-create-update-flhzh"] Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.825802 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcherbe33-account-delete-p6ms2"] Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.834314 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-be33-account-create-update-flhzh"] Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.839625 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d488a05-dbc8-49d8-921b-7fe2b03a8eef-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "8d488a05-dbc8-49d8-921b-7fe2b03a8eef" (UID: "8d488a05-dbc8-49d8-921b-7fe2b03a8eef"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.875424 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fll8l" Dec 02 20:37:41 crc kubenswrapper[4796]: I1202 20:37:41.887549 4796 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8d488a05-dbc8-49d8-921b-7fe2b03a8eef-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:42 crc kubenswrapper[4796]: I1202 20:37:41.989666 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a80e7e4b-45d2-4693-961e-5c9bc35290f8-utilities\") pod \"a80e7e4b-45d2-4693-961e-5c9bc35290f8\" (UID: \"a80e7e4b-45d2-4693-961e-5c9bc35290f8\") " Dec 02 20:37:42 crc kubenswrapper[4796]: I1202 20:37:41.989946 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgt24\" (UniqueName: \"kubernetes.io/projected/a80e7e4b-45d2-4693-961e-5c9bc35290f8-kube-api-access-tgt24\") pod \"a80e7e4b-45d2-4693-961e-5c9bc35290f8\" (UID: \"a80e7e4b-45d2-4693-961e-5c9bc35290f8\") " Dec 02 20:37:42 crc kubenswrapper[4796]: I1202 20:37:41.989983 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a80e7e4b-45d2-4693-961e-5c9bc35290f8-catalog-content\") pod \"a80e7e4b-45d2-4693-961e-5c9bc35290f8\" (UID: \"a80e7e4b-45d2-4693-961e-5c9bc35290f8\") " Dec 02 20:37:42 crc kubenswrapper[4796]: I1202 20:37:41.991677 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a80e7e4b-45d2-4693-961e-5c9bc35290f8-utilities" (OuterVolumeSpecName: "utilities") pod "a80e7e4b-45d2-4693-961e-5c9bc35290f8" (UID: "a80e7e4b-45d2-4693-961e-5c9bc35290f8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:37:42 crc kubenswrapper[4796]: I1202 20:37:41.999436 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a80e7e4b-45d2-4693-961e-5c9bc35290f8-kube-api-access-tgt24" (OuterVolumeSpecName: "kube-api-access-tgt24") pod "a80e7e4b-45d2-4693-961e-5c9bc35290f8" (UID: "a80e7e4b-45d2-4693-961e-5c9bc35290f8"). InnerVolumeSpecName "kube-api-access-tgt24". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:37:42 crc kubenswrapper[4796]: I1202 20:37:42.013572 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a80e7e4b-45d2-4693-961e-5c9bc35290f8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a80e7e4b-45d2-4693-961e-5c9bc35290f8" (UID: "a80e7e4b-45d2-4693-961e-5c9bc35290f8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:37:42 crc kubenswrapper[4796]: I1202 20:37:42.100302 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgt24\" (UniqueName: \"kubernetes.io/projected/a80e7e4b-45d2-4693-961e-5c9bc35290f8-kube-api-access-tgt24\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:42 crc kubenswrapper[4796]: I1202 20:37:42.100338 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a80e7e4b-45d2-4693-961e-5c9bc35290f8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:42 crc kubenswrapper[4796]: I1202 20:37:42.100350 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a80e7e4b-45d2-4693-961e-5c9bc35290f8-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:42 crc kubenswrapper[4796]: I1202 20:37:42.166588 4796 generic.go:334] "Generic (PLEG): container finished" podID="a80e7e4b-45d2-4693-961e-5c9bc35290f8" containerID="9a581e00b0e688006db6187f3cbc5e911519e5fd8c9302553d46e6a194667bd4" exitCode=0 Dec 02 20:37:42 crc kubenswrapper[4796]: I1202 20:37:42.166706 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fll8l" event={"ID":"a80e7e4b-45d2-4693-961e-5c9bc35290f8","Type":"ContainerDied","Data":"9a581e00b0e688006db6187f3cbc5e911519e5fd8c9302553d46e6a194667bd4"} Dec 02 20:37:42 crc kubenswrapper[4796]: I1202 20:37:42.166720 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fll8l" Dec 02 20:37:42 crc kubenswrapper[4796]: I1202 20:37:42.166768 4796 scope.go:117] "RemoveContainer" containerID="9a581e00b0e688006db6187f3cbc5e911519e5fd8c9302553d46e6a194667bd4" Dec 02 20:37:42 crc kubenswrapper[4796]: I1202 20:37:42.166749 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fll8l" event={"ID":"a80e7e4b-45d2-4693-961e-5c9bc35290f8","Type":"ContainerDied","Data":"c0541886d18231659fa3ac5b9eb4f44967d8042f923b511510dec0386d2e4542"} Dec 02 20:37:42 crc kubenswrapper[4796]: I1202 20:37:42.170667 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"8d488a05-dbc8-49d8-921b-7fe2b03a8eef","Type":"ContainerDied","Data":"c9601c296e41c765cb3c5f2648e57c6a8c8c07de93b88d775b1df9b50e98cd35"} Dec 02 20:37:42 crc kubenswrapper[4796]: I1202 20:37:42.170752 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:37:42 crc kubenswrapper[4796]: I1202 20:37:42.198647 4796 scope.go:117] "RemoveContainer" containerID="2dd9c096369ab0ba67f851f41b9dfdc9b4a357b42f7d0c66dd20ebe92f678f15" Dec 02 20:37:42 crc kubenswrapper[4796]: I1202 20:37:42.224371 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fll8l"] Dec 02 20:37:42 crc kubenswrapper[4796]: I1202 20:37:42.237873 4796 scope.go:117] "RemoveContainer" containerID="b87c8ac3114cffa44a171b91a2ed7bf25179abc150ac4464025a88aeff55e89f" Dec 02 20:37:42 crc kubenswrapper[4796]: I1202 20:37:42.244472 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fll8l"] Dec 02 20:37:42 crc kubenswrapper[4796]: I1202 20:37:42.254979 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:37:42 crc kubenswrapper[4796]: I1202 20:37:42.262342 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:37:42 crc kubenswrapper[4796]: I1202 20:37:42.284239 4796 scope.go:117] "RemoveContainer" containerID="9a581e00b0e688006db6187f3cbc5e911519e5fd8c9302553d46e6a194667bd4" Dec 02 20:37:42 crc kubenswrapper[4796]: E1202 20:37:42.284748 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a581e00b0e688006db6187f3cbc5e911519e5fd8c9302553d46e6a194667bd4\": container with ID starting with 9a581e00b0e688006db6187f3cbc5e911519e5fd8c9302553d46e6a194667bd4 not found: ID does not exist" containerID="9a581e00b0e688006db6187f3cbc5e911519e5fd8c9302553d46e6a194667bd4" Dec 02 20:37:42 crc kubenswrapper[4796]: I1202 20:37:42.284803 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a581e00b0e688006db6187f3cbc5e911519e5fd8c9302553d46e6a194667bd4"} err="failed to get container status \"9a581e00b0e688006db6187f3cbc5e911519e5fd8c9302553d46e6a194667bd4\": rpc error: code = NotFound desc = could not find container \"9a581e00b0e688006db6187f3cbc5e911519e5fd8c9302553d46e6a194667bd4\": container with ID starting with 9a581e00b0e688006db6187f3cbc5e911519e5fd8c9302553d46e6a194667bd4 not found: ID does not exist" Dec 02 20:37:42 crc kubenswrapper[4796]: I1202 20:37:42.284849 4796 scope.go:117] "RemoveContainer" containerID="2dd9c096369ab0ba67f851f41b9dfdc9b4a357b42f7d0c66dd20ebe92f678f15" Dec 02 20:37:42 crc kubenswrapper[4796]: E1202 20:37:42.285192 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2dd9c096369ab0ba67f851f41b9dfdc9b4a357b42f7d0c66dd20ebe92f678f15\": container with ID starting with 2dd9c096369ab0ba67f851f41b9dfdc9b4a357b42f7d0c66dd20ebe92f678f15 not found: ID does not exist" containerID="2dd9c096369ab0ba67f851f41b9dfdc9b4a357b42f7d0c66dd20ebe92f678f15" Dec 02 20:37:42 crc kubenswrapper[4796]: I1202 20:37:42.285244 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dd9c096369ab0ba67f851f41b9dfdc9b4a357b42f7d0c66dd20ebe92f678f15"} err="failed to get container status \"2dd9c096369ab0ba67f851f41b9dfdc9b4a357b42f7d0c66dd20ebe92f678f15\": rpc error: code = NotFound desc = could not find container \"2dd9c096369ab0ba67f851f41b9dfdc9b4a357b42f7d0c66dd20ebe92f678f15\": container with ID starting with 
2dd9c096369ab0ba67f851f41b9dfdc9b4a357b42f7d0c66dd20ebe92f678f15 not found: ID does not exist" Dec 02 20:37:42 crc kubenswrapper[4796]: I1202 20:37:42.285300 4796 scope.go:117] "RemoveContainer" containerID="b87c8ac3114cffa44a171b91a2ed7bf25179abc150ac4464025a88aeff55e89f" Dec 02 20:37:42 crc kubenswrapper[4796]: E1202 20:37:42.285979 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b87c8ac3114cffa44a171b91a2ed7bf25179abc150ac4464025a88aeff55e89f\": container with ID starting with b87c8ac3114cffa44a171b91a2ed7bf25179abc150ac4464025a88aeff55e89f not found: ID does not exist" containerID="b87c8ac3114cffa44a171b91a2ed7bf25179abc150ac4464025a88aeff55e89f" Dec 02 20:37:42 crc kubenswrapper[4796]: I1202 20:37:42.286025 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b87c8ac3114cffa44a171b91a2ed7bf25179abc150ac4464025a88aeff55e89f"} err="failed to get container status \"b87c8ac3114cffa44a171b91a2ed7bf25179abc150ac4464025a88aeff55e89f\": rpc error: code = NotFound desc = could not find container \"b87c8ac3114cffa44a171b91a2ed7bf25179abc150ac4464025a88aeff55e89f\": container with ID starting with b87c8ac3114cffa44a171b91a2ed7bf25179abc150ac4464025a88aeff55e89f not found: ID does not exist" Dec 02 20:37:42 crc kubenswrapper[4796]: I1202 20:37:42.286058 4796 scope.go:117] "RemoveContainer" containerID="47e8efee9dd04fbd6799794418af3768051d531e7be8185a0af5546b8dbdd82a" Dec 02 20:37:42 crc kubenswrapper[4796]: I1202 20:37:42.310359 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:37:43 crc kubenswrapper[4796]: I1202 20:37:43.180418 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"55690485-e94e-48e1-9a93-30d0167cb17d","Type":"ContainerStarted","Data":"97c936e0e01d2819f29f1dde3c9703dadc7a7ce07eb37fd0b87bddb7812974b7"} Dec 02 20:37:43 crc kubenswrapper[4796]: I1202 20:37:43.180743 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"55690485-e94e-48e1-9a93-30d0167cb17d","Type":"ContainerStarted","Data":"9fd1493dfceea65cf863a1227b0b489bf41d2239a1abf21257b2518445e7b0c3"} Dec 02 20:37:43 crc kubenswrapper[4796]: I1202 20:37:43.278865 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e3f97d0-7913-47ba-a3a8-e48c0c941aec" path="/var/lib/kubelet/pods/3e3f97d0-7913-47ba-a3a8-e48c0c941aec/volumes" Dec 02 20:37:43 crc kubenswrapper[4796]: I1202 20:37:43.279748 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4028ab5e-8906-45fd-abd6-0954db71014e" path="/var/lib/kubelet/pods/4028ab5e-8906-45fd-abd6-0954db71014e/volumes" Dec 02 20:37:43 crc kubenswrapper[4796]: I1202 20:37:43.280448 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d488a05-dbc8-49d8-921b-7fe2b03a8eef" path="/var/lib/kubelet/pods/8d488a05-dbc8-49d8-921b-7fe2b03a8eef/volumes" Dec 02 20:37:43 crc kubenswrapper[4796]: I1202 20:37:43.282161 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a80e7e4b-45d2-4693-961e-5c9bc35290f8" path="/var/lib/kubelet/pods/a80e7e4b-45d2-4693-961e-5c9bc35290f8/volumes" Dec 02 20:37:43 crc kubenswrapper[4796]: I1202 20:37:43.283476 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad" path="/var/lib/kubelet/pods/aaebb05e-f2b8-482b-bd53-ab64b7fcc1ad/volumes" Dec 02 
20:37:43 crc kubenswrapper[4796]: I1202 20:37:43.284453 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f816cdb8-5959-4df2-942d-cd6f7d47557a" path="/var/lib/kubelet/pods/f816cdb8-5959-4df2-942d-cd6f7d47557a/volumes" Dec 02 20:37:43 crc kubenswrapper[4796]: I1202 20:37:43.877017 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-dwsr6"] Dec 02 20:37:43 crc kubenswrapper[4796]: E1202 20:37:43.877649 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d488a05-dbc8-49d8-921b-7fe2b03a8eef" containerName="watcher-decision-engine" Dec 02 20:37:43 crc kubenswrapper[4796]: I1202 20:37:43.877669 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d488a05-dbc8-49d8-921b-7fe2b03a8eef" containerName="watcher-decision-engine" Dec 02 20:37:43 crc kubenswrapper[4796]: E1202 20:37:43.877685 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a80e7e4b-45d2-4693-961e-5c9bc35290f8" containerName="extract-utilities" Dec 02 20:37:43 crc kubenswrapper[4796]: I1202 20:37:43.877693 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a80e7e4b-45d2-4693-961e-5c9bc35290f8" containerName="extract-utilities" Dec 02 20:37:43 crc kubenswrapper[4796]: E1202 20:37:43.877707 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a80e7e4b-45d2-4693-961e-5c9bc35290f8" containerName="extract-content" Dec 02 20:37:43 crc kubenswrapper[4796]: I1202 20:37:43.877713 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a80e7e4b-45d2-4693-961e-5c9bc35290f8" containerName="extract-content" Dec 02 20:37:43 crc kubenswrapper[4796]: E1202 20:37:43.877723 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a80e7e4b-45d2-4693-961e-5c9bc35290f8" containerName="registry-server" Dec 02 20:37:43 crc kubenswrapper[4796]: I1202 20:37:43.877729 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a80e7e4b-45d2-4693-961e-5c9bc35290f8" containerName="registry-server" Dec 02 20:37:43 crc kubenswrapper[4796]: I1202 20:37:43.877908 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d488a05-dbc8-49d8-921b-7fe2b03a8eef" containerName="watcher-decision-engine" Dec 02 20:37:43 crc kubenswrapper[4796]: I1202 20:37:43.877925 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a80e7e4b-45d2-4693-961e-5c9bc35290f8" containerName="registry-server" Dec 02 20:37:43 crc kubenswrapper[4796]: I1202 20:37:43.878511 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-dwsr6" Dec 02 20:37:43 crc kubenswrapper[4796]: I1202 20:37:43.891785 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-dwsr6"] Dec 02 20:37:43 crc kubenswrapper[4796]: I1202 20:37:43.939368 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8761c0a-f158-432e-992e-ef8dc6fd62de-operator-scripts\") pod \"watcher-db-create-dwsr6\" (UID: \"e8761c0a-f158-432e-992e-ef8dc6fd62de\") " pod="watcher-kuttl-default/watcher-db-create-dwsr6" Dec 02 20:37:43 crc kubenswrapper[4796]: I1202 20:37:43.939479 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6s4t\" (UniqueName: \"kubernetes.io/projected/e8761c0a-f158-432e-992e-ef8dc6fd62de-kube-api-access-l6s4t\") pod \"watcher-db-create-dwsr6\" (UID: \"e8761c0a-f158-432e-992e-ef8dc6fd62de\") " pod="watcher-kuttl-default/watcher-db-create-dwsr6" Dec 02 20:37:43 crc kubenswrapper[4796]: I1202 20:37:43.994586 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-f764-account-create-update-hjxvr"] Dec 02 20:37:43 crc kubenswrapper[4796]: I1202 20:37:43.996579 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-f764-account-create-update-hjxvr" Dec 02 20:37:44 crc kubenswrapper[4796]: I1202 20:37:44.000121 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Dec 02 20:37:44 crc kubenswrapper[4796]: I1202 20:37:44.005787 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-f764-account-create-update-hjxvr"] Dec 02 20:37:44 crc kubenswrapper[4796]: I1202 20:37:44.040882 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8761c0a-f158-432e-992e-ef8dc6fd62de-operator-scripts\") pod \"watcher-db-create-dwsr6\" (UID: \"e8761c0a-f158-432e-992e-ef8dc6fd62de\") " pod="watcher-kuttl-default/watcher-db-create-dwsr6" Dec 02 20:37:44 crc kubenswrapper[4796]: I1202 20:37:44.040950 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6s4t\" (UniqueName: \"kubernetes.io/projected/e8761c0a-f158-432e-992e-ef8dc6fd62de-kube-api-access-l6s4t\") pod \"watcher-db-create-dwsr6\" (UID: \"e8761c0a-f158-432e-992e-ef8dc6fd62de\") " pod="watcher-kuttl-default/watcher-db-create-dwsr6" Dec 02 20:37:44 crc kubenswrapper[4796]: I1202 20:37:44.042035 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8761c0a-f158-432e-992e-ef8dc6fd62de-operator-scripts\") pod \"watcher-db-create-dwsr6\" (UID: \"e8761c0a-f158-432e-992e-ef8dc6fd62de\") " pod="watcher-kuttl-default/watcher-db-create-dwsr6" Dec 02 20:37:44 crc kubenswrapper[4796]: I1202 20:37:44.056869 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6s4t\" (UniqueName: \"kubernetes.io/projected/e8761c0a-f158-432e-992e-ef8dc6fd62de-kube-api-access-l6s4t\") pod \"watcher-db-create-dwsr6\" (UID: \"e8761c0a-f158-432e-992e-ef8dc6fd62de\") " pod="watcher-kuttl-default/watcher-db-create-dwsr6" Dec 02 20:37:44 crc kubenswrapper[4796]: I1202 20:37:44.142835 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbkjq\" (UniqueName: \"kubernetes.io/projected/515bff56-7b38-4e50-932e-4eee69d04268-kube-api-access-kbkjq\") pod \"watcher-f764-account-create-update-hjxvr\" (UID: \"515bff56-7b38-4e50-932e-4eee69d04268\") " pod="watcher-kuttl-default/watcher-f764-account-create-update-hjxvr" Dec 02 20:37:44 crc kubenswrapper[4796]: I1202 20:37:44.143110 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/515bff56-7b38-4e50-932e-4eee69d04268-operator-scripts\") pod \"watcher-f764-account-create-update-hjxvr\" (UID: \"515bff56-7b38-4e50-932e-4eee69d04268\") " pod="watcher-kuttl-default/watcher-f764-account-create-update-hjxvr" Dec 02 20:37:44 crc kubenswrapper[4796]: I1202 20:37:44.194737 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"55690485-e94e-48e1-9a93-30d0167cb17d","Type":"ContainerStarted","Data":"e7040e5f4b4a83f8c4a20c324516ffff23076124b101be2bde61e952056c5dfd"} Dec 02 20:37:44 crc kubenswrapper[4796]: I1202 20:37:44.197268 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-dwsr6" Dec 02 20:37:44 crc kubenswrapper[4796]: I1202 20:37:44.244648 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbkjq\" (UniqueName: \"kubernetes.io/projected/515bff56-7b38-4e50-932e-4eee69d04268-kube-api-access-kbkjq\") pod \"watcher-f764-account-create-update-hjxvr\" (UID: \"515bff56-7b38-4e50-932e-4eee69d04268\") " pod="watcher-kuttl-default/watcher-f764-account-create-update-hjxvr" Dec 02 20:37:44 crc kubenswrapper[4796]: I1202 20:37:44.244718 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/515bff56-7b38-4e50-932e-4eee69d04268-operator-scripts\") pod \"watcher-f764-account-create-update-hjxvr\" (UID: \"515bff56-7b38-4e50-932e-4eee69d04268\") " pod="watcher-kuttl-default/watcher-f764-account-create-update-hjxvr" Dec 02 20:37:44 crc kubenswrapper[4796]: I1202 20:37:44.245785 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/515bff56-7b38-4e50-932e-4eee69d04268-operator-scripts\") pod \"watcher-f764-account-create-update-hjxvr\" (UID: \"515bff56-7b38-4e50-932e-4eee69d04268\") " pod="watcher-kuttl-default/watcher-f764-account-create-update-hjxvr" Dec 02 20:37:44 crc kubenswrapper[4796]: I1202 20:37:44.265847 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbkjq\" (UniqueName: \"kubernetes.io/projected/515bff56-7b38-4e50-932e-4eee69d04268-kube-api-access-kbkjq\") pod \"watcher-f764-account-create-update-hjxvr\" (UID: \"515bff56-7b38-4e50-932e-4eee69d04268\") " pod="watcher-kuttl-default/watcher-f764-account-create-update-hjxvr" Dec 02 20:37:44 crc kubenswrapper[4796]: I1202 20:37:44.316450 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-f764-account-create-update-hjxvr" Dec 02 20:37:44 crc kubenswrapper[4796]: W1202 20:37:44.722487 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8761c0a_f158_432e_992e_ef8dc6fd62de.slice/crio-36a19aac1c8e96682249f6d7c4782d386df5c3612a8f0d284c3e1d6c0767cdd3 WatchSource:0}: Error finding container 36a19aac1c8e96682249f6d7c4782d386df5c3612a8f0d284c3e1d6c0767cdd3: Status 404 returned error can't find the container with id 36a19aac1c8e96682249f6d7c4782d386df5c3612a8f0d284c3e1d6c0767cdd3 Dec 02 20:37:44 crc kubenswrapper[4796]: I1202 20:37:44.723388 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-dwsr6"] Dec 02 20:37:44 crc kubenswrapper[4796]: I1202 20:37:44.983660 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-f764-account-create-update-hjxvr"] Dec 02 20:37:44 crc kubenswrapper[4796]: W1202 20:37:44.992108 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod515bff56_7b38_4e50_932e_4eee69d04268.slice/crio-e4b109de6a77c43d5c270767fee50a9ad484f4867f5bfe7ce4b5382b63417656 WatchSource:0}: Error finding container e4b109de6a77c43d5c270767fee50a9ad484f4867f5bfe7ce4b5382b63417656: Status 404 returned error can't find the container with id e4b109de6a77c43d5c270767fee50a9ad484f4867f5bfe7ce4b5382b63417656 Dec 02 20:37:45 crc kubenswrapper[4796]: I1202 20:37:45.203869 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-f764-account-create-update-hjxvr" event={"ID":"515bff56-7b38-4e50-932e-4eee69d04268","Type":"ContainerStarted","Data":"e4b109de6a77c43d5c270767fee50a9ad484f4867f5bfe7ce4b5382b63417656"} Dec 02 20:37:45 crc kubenswrapper[4796]: I1202 20:37:45.208078 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"55690485-e94e-48e1-9a93-30d0167cb17d","Type":"ContainerStarted","Data":"95bac6b18181e976764938af5c50f58af70725e173c159fa868298581bdda97c"} Dec 02 20:37:45 crc kubenswrapper[4796]: I1202 20:37:45.210155 4796 generic.go:334] "Generic (PLEG): container finished" podID="e8761c0a-f158-432e-992e-ef8dc6fd62de" containerID="c556a6c1b56b5c951afcf1365f2a132dc668407f177435ffdff0979a9b5a0f0d" exitCode=0 Dec 02 20:37:45 crc kubenswrapper[4796]: I1202 20:37:45.210208 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-dwsr6" event={"ID":"e8761c0a-f158-432e-992e-ef8dc6fd62de","Type":"ContainerDied","Data":"c556a6c1b56b5c951afcf1365f2a132dc668407f177435ffdff0979a9b5a0f0d"} Dec 02 20:37:45 crc kubenswrapper[4796]: I1202 20:37:45.210240 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-dwsr6" event={"ID":"e8761c0a-f158-432e-992e-ef8dc6fd62de","Type":"ContainerStarted","Data":"36a19aac1c8e96682249f6d7c4782d386df5c3612a8f0d284c3e1d6c0767cdd3"} Dec 02 20:37:46 crc kubenswrapper[4796]: I1202 20:37:46.236072 4796 generic.go:334] "Generic (PLEG): container finished" podID="515bff56-7b38-4e50-932e-4eee69d04268" containerID="a03a4b356050b4fa2297f8bc886c27a2381c1c565aa7f2c1b9a47258f297a073" exitCode=0 Dec 02 20:37:46 crc kubenswrapper[4796]: I1202 20:37:46.237667 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-f764-account-create-update-hjxvr" 
event={"ID":"515bff56-7b38-4e50-932e-4eee69d04268","Type":"ContainerDied","Data":"a03a4b356050b4fa2297f8bc886c27a2381c1c565aa7f2c1b9a47258f297a073"} Dec 02 20:37:46 crc kubenswrapper[4796]: I1202 20:37:46.244242 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"55690485-e94e-48e1-9a93-30d0167cb17d","Type":"ContainerStarted","Data":"27eba010589333586390b4e84f280071b43571511672f678bfa76d02e655f9ba"} Dec 02 20:37:46 crc kubenswrapper[4796]: I1202 20:37:46.247419 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:37:46 crc kubenswrapper[4796]: I1202 20:37:46.316397 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.9526368170000001 podStartE2EDuration="5.316359345s" podCreationTimestamp="2025-12-02 20:37:41 +0000 UTC" firstStartedPulling="2025-12-02 20:37:42.310234033 +0000 UTC m=+1545.313609567" lastFinishedPulling="2025-12-02 20:37:45.673956571 +0000 UTC m=+1548.677332095" observedRunningTime="2025-12-02 20:37:46.305215127 +0000 UTC m=+1549.308590701" watchObservedRunningTime="2025-12-02 20:37:46.316359345 +0000 UTC m=+1549.319734909" Dec 02 20:37:46 crc kubenswrapper[4796]: I1202 20:37:46.659802 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-dwsr6" Dec 02 20:37:46 crc kubenswrapper[4796]: I1202 20:37:46.685083 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8761c0a-f158-432e-992e-ef8dc6fd62de-operator-scripts\") pod \"e8761c0a-f158-432e-992e-ef8dc6fd62de\" (UID: \"e8761c0a-f158-432e-992e-ef8dc6fd62de\") " Dec 02 20:37:46 crc kubenswrapper[4796]: I1202 20:37:46.685156 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6s4t\" (UniqueName: \"kubernetes.io/projected/e8761c0a-f158-432e-992e-ef8dc6fd62de-kube-api-access-l6s4t\") pod \"e8761c0a-f158-432e-992e-ef8dc6fd62de\" (UID: \"e8761c0a-f158-432e-992e-ef8dc6fd62de\") " Dec 02 20:37:46 crc kubenswrapper[4796]: I1202 20:37:46.687175 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8761c0a-f158-432e-992e-ef8dc6fd62de-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e8761c0a-f158-432e-992e-ef8dc6fd62de" (UID: "e8761c0a-f158-432e-992e-ef8dc6fd62de"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:37:46 crc kubenswrapper[4796]: I1202 20:37:46.726220 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8761c0a-f158-432e-992e-ef8dc6fd62de-kube-api-access-l6s4t" (OuterVolumeSpecName: "kube-api-access-l6s4t") pod "e8761c0a-f158-432e-992e-ef8dc6fd62de" (UID: "e8761c0a-f158-432e-992e-ef8dc6fd62de"). InnerVolumeSpecName "kube-api-access-l6s4t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:37:46 crc kubenswrapper[4796]: I1202 20:37:46.788582 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8761c0a-f158-432e-992e-ef8dc6fd62de-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:46 crc kubenswrapper[4796]: I1202 20:37:46.788643 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6s4t\" (UniqueName: \"kubernetes.io/projected/e8761c0a-f158-432e-992e-ef8dc6fd62de-kube-api-access-l6s4t\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:47 crc kubenswrapper[4796]: I1202 20:37:47.255313 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-dwsr6" event={"ID":"e8761c0a-f158-432e-992e-ef8dc6fd62de","Type":"ContainerDied","Data":"36a19aac1c8e96682249f6d7c4782d386df5c3612a8f0d284c3e1d6c0767cdd3"} Dec 02 20:37:47 crc kubenswrapper[4796]: I1202 20:37:47.255437 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-dwsr6" Dec 02 20:37:47 crc kubenswrapper[4796]: I1202 20:37:47.264015 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36a19aac1c8e96682249f6d7c4782d386df5c3612a8f0d284c3e1d6c0767cdd3" Dec 02 20:37:47 crc kubenswrapper[4796]: I1202 20:37:47.672352 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-f764-account-create-update-hjxvr" Dec 02 20:37:47 crc kubenswrapper[4796]: I1202 20:37:47.724487 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbkjq\" (UniqueName: \"kubernetes.io/projected/515bff56-7b38-4e50-932e-4eee69d04268-kube-api-access-kbkjq\") pod \"515bff56-7b38-4e50-932e-4eee69d04268\" (UID: \"515bff56-7b38-4e50-932e-4eee69d04268\") " Dec 02 20:37:47 crc kubenswrapper[4796]: I1202 20:37:47.724608 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/515bff56-7b38-4e50-932e-4eee69d04268-operator-scripts\") pod \"515bff56-7b38-4e50-932e-4eee69d04268\" (UID: \"515bff56-7b38-4e50-932e-4eee69d04268\") " Dec 02 20:37:47 crc kubenswrapper[4796]: I1202 20:37:47.726314 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/515bff56-7b38-4e50-932e-4eee69d04268-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "515bff56-7b38-4e50-932e-4eee69d04268" (UID: "515bff56-7b38-4e50-932e-4eee69d04268"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:37:47 crc kubenswrapper[4796]: I1202 20:37:47.733372 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/515bff56-7b38-4e50-932e-4eee69d04268-kube-api-access-kbkjq" (OuterVolumeSpecName: "kube-api-access-kbkjq") pod "515bff56-7b38-4e50-932e-4eee69d04268" (UID: "515bff56-7b38-4e50-932e-4eee69d04268"). InnerVolumeSpecName "kube-api-access-kbkjq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:37:47 crc kubenswrapper[4796]: I1202 20:37:47.826444 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/515bff56-7b38-4e50-932e-4eee69d04268-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:47 crc kubenswrapper[4796]: I1202 20:37:47.826475 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbkjq\" (UniqueName: \"kubernetes.io/projected/515bff56-7b38-4e50-932e-4eee69d04268-kube-api-access-kbkjq\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:48 crc kubenswrapper[4796]: I1202 20:37:48.265905 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-f764-account-create-update-hjxvr" event={"ID":"515bff56-7b38-4e50-932e-4eee69d04268","Type":"ContainerDied","Data":"e4b109de6a77c43d5c270767fee50a9ad484f4867f5bfe7ce4b5382b63417656"} Dec 02 20:37:48 crc kubenswrapper[4796]: I1202 20:37:48.265940 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4b109de6a77c43d5c270767fee50a9ad484f4867f5bfe7ce4b5382b63417656" Dec 02 20:37:48 crc kubenswrapper[4796]: I1202 20:37:48.266013 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-f764-account-create-update-hjxvr" Dec 02 20:37:49 crc kubenswrapper[4796]: I1202 20:37:49.247865 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-mlksj"] Dec 02 20:37:49 crc kubenswrapper[4796]: E1202 20:37:49.248287 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="515bff56-7b38-4e50-932e-4eee69d04268" containerName="mariadb-account-create-update" Dec 02 20:37:49 crc kubenswrapper[4796]: I1202 20:37:49.248309 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="515bff56-7b38-4e50-932e-4eee69d04268" containerName="mariadb-account-create-update" Dec 02 20:37:49 crc kubenswrapper[4796]: E1202 20:37:49.248327 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8761c0a-f158-432e-992e-ef8dc6fd62de" containerName="mariadb-database-create" Dec 02 20:37:49 crc kubenswrapper[4796]: I1202 20:37:49.248336 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8761c0a-f158-432e-992e-ef8dc6fd62de" containerName="mariadb-database-create" Dec 02 20:37:49 crc kubenswrapper[4796]: I1202 20:37:49.248548 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="515bff56-7b38-4e50-932e-4eee69d04268" containerName="mariadb-account-create-update" Dec 02 20:37:49 crc kubenswrapper[4796]: I1202 20:37:49.248580 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8761c0a-f158-432e-992e-ef8dc6fd62de" containerName="mariadb-database-create" Dec 02 20:37:49 crc kubenswrapper[4796]: I1202 20:37:49.249324 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-mlksj" Dec 02 20:37:49 crc kubenswrapper[4796]: I1202 20:37:49.256299 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-mlksj"] Dec 02 20:37:49 crc kubenswrapper[4796]: I1202 20:37:49.257332 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Dec 02 20:37:49 crc kubenswrapper[4796]: I1202 20:37:49.257717 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-9q6xf" Dec 02 20:37:49 crc kubenswrapper[4796]: I1202 20:37:49.350926 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4b4160ff-1706-4609-a2cc-39d3ce42eeeb-db-sync-config-data\") pod \"watcher-kuttl-db-sync-mlksj\" (UID: \"4b4160ff-1706-4609-a2cc-39d3ce42eeeb\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-mlksj" Dec 02 20:37:49 crc kubenswrapper[4796]: I1202 20:37:49.351008 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b4160ff-1706-4609-a2cc-39d3ce42eeeb-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-mlksj\" (UID: \"4b4160ff-1706-4609-a2cc-39d3ce42eeeb\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-mlksj" Dec 02 20:37:49 crc kubenswrapper[4796]: I1202 20:37:49.351225 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b4160ff-1706-4609-a2cc-39d3ce42eeeb-config-data\") pod \"watcher-kuttl-db-sync-mlksj\" (UID: \"4b4160ff-1706-4609-a2cc-39d3ce42eeeb\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-mlksj" Dec 02 20:37:49 crc kubenswrapper[4796]: I1202 20:37:49.351378 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6jgg\" (UniqueName: \"kubernetes.io/projected/4b4160ff-1706-4609-a2cc-39d3ce42eeeb-kube-api-access-f6jgg\") pod \"watcher-kuttl-db-sync-mlksj\" (UID: \"4b4160ff-1706-4609-a2cc-39d3ce42eeeb\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-mlksj" Dec 02 20:37:49 crc kubenswrapper[4796]: I1202 20:37:49.452905 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6jgg\" (UniqueName: \"kubernetes.io/projected/4b4160ff-1706-4609-a2cc-39d3ce42eeeb-kube-api-access-f6jgg\") pod \"watcher-kuttl-db-sync-mlksj\" (UID: \"4b4160ff-1706-4609-a2cc-39d3ce42eeeb\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-mlksj" Dec 02 20:37:49 crc kubenswrapper[4796]: I1202 20:37:49.453027 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4b4160ff-1706-4609-a2cc-39d3ce42eeeb-db-sync-config-data\") pod \"watcher-kuttl-db-sync-mlksj\" (UID: \"4b4160ff-1706-4609-a2cc-39d3ce42eeeb\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-mlksj" Dec 02 20:37:49 crc kubenswrapper[4796]: I1202 20:37:49.453063 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b4160ff-1706-4609-a2cc-39d3ce42eeeb-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-mlksj\" (UID: \"4b4160ff-1706-4609-a2cc-39d3ce42eeeb\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-mlksj" Dec 02 
20:37:49 crc kubenswrapper[4796]: I1202 20:37:49.453102 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b4160ff-1706-4609-a2cc-39d3ce42eeeb-config-data\") pod \"watcher-kuttl-db-sync-mlksj\" (UID: \"4b4160ff-1706-4609-a2cc-39d3ce42eeeb\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-mlksj" Dec 02 20:37:49 crc kubenswrapper[4796]: I1202 20:37:49.457825 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b4160ff-1706-4609-a2cc-39d3ce42eeeb-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-mlksj\" (UID: \"4b4160ff-1706-4609-a2cc-39d3ce42eeeb\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-mlksj" Dec 02 20:37:49 crc kubenswrapper[4796]: I1202 20:37:49.459735 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4b4160ff-1706-4609-a2cc-39d3ce42eeeb-db-sync-config-data\") pod \"watcher-kuttl-db-sync-mlksj\" (UID: \"4b4160ff-1706-4609-a2cc-39d3ce42eeeb\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-mlksj" Dec 02 20:37:49 crc kubenswrapper[4796]: I1202 20:37:49.460265 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b4160ff-1706-4609-a2cc-39d3ce42eeeb-config-data\") pod \"watcher-kuttl-db-sync-mlksj\" (UID: \"4b4160ff-1706-4609-a2cc-39d3ce42eeeb\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-mlksj" Dec 02 20:37:49 crc kubenswrapper[4796]: I1202 20:37:49.483078 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6jgg\" (UniqueName: \"kubernetes.io/projected/4b4160ff-1706-4609-a2cc-39d3ce42eeeb-kube-api-access-f6jgg\") pod \"watcher-kuttl-db-sync-mlksj\" (UID: \"4b4160ff-1706-4609-a2cc-39d3ce42eeeb\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-mlksj" Dec 02 20:37:49 crc kubenswrapper[4796]: I1202 20:37:49.564940 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-mlksj" Dec 02 20:37:50 crc kubenswrapper[4796]: I1202 20:37:50.177567 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-mlksj"] Dec 02 20:37:50 crc kubenswrapper[4796]: I1202 20:37:50.298923 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-mlksj" event={"ID":"4b4160ff-1706-4609-a2cc-39d3ce42eeeb","Type":"ContainerStarted","Data":"705f1f9f1b0dcbe2343bd31b6ee007396dce6e373a43b0ca4092b6f62d488be3"} Dec 02 20:37:51 crc kubenswrapper[4796]: I1202 20:37:51.306669 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-mlksj" event={"ID":"4b4160ff-1706-4609-a2cc-39d3ce42eeeb","Type":"ContainerStarted","Data":"448c4a0205d04bd542a12e6b1d636a513e6cad7a10a5b861b7f952db0ea0ad85"} Dec 02 20:37:51 crc kubenswrapper[4796]: I1202 20:37:51.327198 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-mlksj" podStartSLOduration=2.3271805199999998 podStartE2EDuration="2.32718052s" podCreationTimestamp="2025-12-02 20:37:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:37:51.322653151 +0000 UTC m=+1554.326028675" watchObservedRunningTime="2025-12-02 20:37:51.32718052 +0000 UTC m=+1554.330556054" Dec 02 20:37:53 crc kubenswrapper[4796]: I1202 20:37:53.327919 4796 generic.go:334] "Generic (PLEG): container finished" podID="4b4160ff-1706-4609-a2cc-39d3ce42eeeb" containerID="448c4a0205d04bd542a12e6b1d636a513e6cad7a10a5b861b7f952db0ea0ad85" exitCode=0 Dec 02 20:37:53 crc kubenswrapper[4796]: I1202 20:37:53.328108 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-mlksj" event={"ID":"4b4160ff-1706-4609-a2cc-39d3ce42eeeb","Type":"ContainerDied","Data":"448c4a0205d04bd542a12e6b1d636a513e6cad7a10a5b861b7f952db0ea0ad85"} Dec 02 20:37:54 crc kubenswrapper[4796]: I1202 20:37:54.716960 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-mlksj" Dec 02 20:37:54 crc kubenswrapper[4796]: I1202 20:37:54.754284 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4b4160ff-1706-4609-a2cc-39d3ce42eeeb-db-sync-config-data\") pod \"4b4160ff-1706-4609-a2cc-39d3ce42eeeb\" (UID: \"4b4160ff-1706-4609-a2cc-39d3ce42eeeb\") " Dec 02 20:37:54 crc kubenswrapper[4796]: I1202 20:37:54.754381 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b4160ff-1706-4609-a2cc-39d3ce42eeeb-combined-ca-bundle\") pod \"4b4160ff-1706-4609-a2cc-39d3ce42eeeb\" (UID: \"4b4160ff-1706-4609-a2cc-39d3ce42eeeb\") " Dec 02 20:37:54 crc kubenswrapper[4796]: I1202 20:37:54.816564 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b4160ff-1706-4609-a2cc-39d3ce42eeeb-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4b4160ff-1706-4609-a2cc-39d3ce42eeeb" (UID: "4b4160ff-1706-4609-a2cc-39d3ce42eeeb"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:37:54 crc kubenswrapper[4796]: I1202 20:37:54.842878 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b4160ff-1706-4609-a2cc-39d3ce42eeeb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b4160ff-1706-4609-a2cc-39d3ce42eeeb" (UID: "4b4160ff-1706-4609-a2cc-39d3ce42eeeb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:37:54 crc kubenswrapper[4796]: I1202 20:37:54.858792 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b4160ff-1706-4609-a2cc-39d3ce42eeeb-config-data\") pod \"4b4160ff-1706-4609-a2cc-39d3ce42eeeb\" (UID: \"4b4160ff-1706-4609-a2cc-39d3ce42eeeb\") " Dec 02 20:37:54 crc kubenswrapper[4796]: I1202 20:37:54.858887 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6jgg\" (UniqueName: \"kubernetes.io/projected/4b4160ff-1706-4609-a2cc-39d3ce42eeeb-kube-api-access-f6jgg\") pod \"4b4160ff-1706-4609-a2cc-39d3ce42eeeb\" (UID: \"4b4160ff-1706-4609-a2cc-39d3ce42eeeb\") " Dec 02 20:37:54 crc kubenswrapper[4796]: I1202 20:37:54.859652 4796 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4b4160ff-1706-4609-a2cc-39d3ce42eeeb-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:54 crc kubenswrapper[4796]: I1202 20:37:54.859673 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b4160ff-1706-4609-a2cc-39d3ce42eeeb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:54 crc kubenswrapper[4796]: I1202 20:37:54.868739 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b4160ff-1706-4609-a2cc-39d3ce42eeeb-kube-api-access-f6jgg" (OuterVolumeSpecName: "kube-api-access-f6jgg") pod "4b4160ff-1706-4609-a2cc-39d3ce42eeeb" (UID: "4b4160ff-1706-4609-a2cc-39d3ce42eeeb"). InnerVolumeSpecName "kube-api-access-f6jgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:37:54 crc kubenswrapper[4796]: I1202 20:37:54.933618 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b4160ff-1706-4609-a2cc-39d3ce42eeeb-config-data" (OuterVolumeSpecName: "config-data") pod "4b4160ff-1706-4609-a2cc-39d3ce42eeeb" (UID: "4b4160ff-1706-4609-a2cc-39d3ce42eeeb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:37:54 crc kubenswrapper[4796]: I1202 20:37:54.960481 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b4160ff-1706-4609-a2cc-39d3ce42eeeb-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:54 crc kubenswrapper[4796]: I1202 20:37:54.960517 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6jgg\" (UniqueName: \"kubernetes.io/projected/4b4160ff-1706-4609-a2cc-39d3ce42eeeb-kube-api-access-f6jgg\") on node \"crc\" DevicePath \"\"" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.189721 4796 patch_prober.go:28] interesting pod/machine-config-daemon-wzhpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.189812 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.189873 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.190895 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a3b1e4c1e71e5db50d52130283a4f488c4d2f8d9bfa463ec7357e0975e3daac1"} pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.190960 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerName="machine-config-daemon" containerID="cri-o://a3b1e4c1e71e5db50d52130283a4f488c4d2f8d9bfa463ec7357e0975e3daac1" gracePeriod=600 Dec 02 20:37:55 crc kubenswrapper[4796]: E1202 20:37:55.323040 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wzhpq_openshift-machine-config-operator(5558dc7c-93f9-4212-bf22-fdec743e47ee)\"" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.357342 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-mlksj" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.357367 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-mlksj" event={"ID":"4b4160ff-1706-4609-a2cc-39d3ce42eeeb","Type":"ContainerDied","Data":"705f1f9f1b0dcbe2343bd31b6ee007396dce6e373a43b0ca4092b6f62d488be3"} Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.357605 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="705f1f9f1b0dcbe2343bd31b6ee007396dce6e373a43b0ca4092b6f62d488be3" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.362314 4796 generic.go:334] "Generic (PLEG): container finished" podID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerID="a3b1e4c1e71e5db50d52130283a4f488c4d2f8d9bfa463ec7357e0975e3daac1" exitCode=0 Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.362400 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" event={"ID":"5558dc7c-93f9-4212-bf22-fdec743e47ee","Type":"ContainerDied","Data":"a3b1e4c1e71e5db50d52130283a4f488c4d2f8d9bfa463ec7357e0975e3daac1"} Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.362450 4796 scope.go:117] "RemoveContainer" containerID="479b03b0d22f42a532c48eb369a41ad10eb068f11b5f4600bf6355106af1f04c" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.363152 4796 scope.go:117] "RemoveContainer" containerID="a3b1e4c1e71e5db50d52130283a4f488c4d2f8d9bfa463ec7357e0975e3daac1" Dec 02 20:37:55 crc kubenswrapper[4796]: E1202 20:37:55.363438 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wzhpq_openshift-machine-config-operator(5558dc7c-93f9-4212-bf22-fdec743e47ee)\"" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.543470 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:37:55 crc kubenswrapper[4796]: E1202 20:37:55.543811 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b4160ff-1706-4609-a2cc-39d3ce42eeeb" containerName="watcher-kuttl-db-sync" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.543825 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b4160ff-1706-4609-a2cc-39d3ce42eeeb" containerName="watcher-kuttl-db-sync" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.543981 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b4160ff-1706-4609-a2cc-39d3ce42eeeb" containerName="watcher-kuttl-db-sync" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.548609 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.559866 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.562566 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-9q6xf" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.580009 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.581419 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.590039 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.614114 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.615622 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.624119 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.639492 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.650930 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.671700 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbjnw\" (UniqueName: \"kubernetes.io/projected/06468907-f110-40c0-909f-79278237c434-kube-api-access-qbjnw\") pod \"watcher-kuttl-api-1\" (UID: \"06468907-f110-40c0-909f-79278237c434\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.671760 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbcf272-2b8d-4b5f-98eb-88dce65767b2-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"7fbcf272-2b8d-4b5f-98eb-88dce65767b2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.671814 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fksv\" (UniqueName: \"kubernetes.io/projected/7fbcf272-2b8d-4b5f-98eb-88dce65767b2-kube-api-access-7fksv\") pod \"watcher-kuttl-api-0\" (UID: \"7fbcf272-2b8d-4b5f-98eb-88dce65767b2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.671837 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7fbcf272-2b8d-4b5f-98eb-88dce65767b2-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"7fbcf272-2b8d-4b5f-98eb-88dce65767b2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 
20:37:55.671864 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7fbcf272-2b8d-4b5f-98eb-88dce65767b2-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"7fbcf272-2b8d-4b5f-98eb-88dce65767b2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.671880 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/06468907-f110-40c0-909f-79278237c434-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"06468907-f110-40c0-909f-79278237c434\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.671900 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fbcf272-2b8d-4b5f-98eb-88dce65767b2-logs\") pod \"watcher-kuttl-api-0\" (UID: \"7fbcf272-2b8d-4b5f-98eb-88dce65767b2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.672192 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/06468907-f110-40c0-909f-79278237c434-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"06468907-f110-40c0-909f-79278237c434\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.672236 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06468907-f110-40c0-909f-79278237c434-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"06468907-f110-40c0-909f-79278237c434\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.672271 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06468907-f110-40c0-909f-79278237c434-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"06468907-f110-40c0-909f-79278237c434\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.672289 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06468907-f110-40c0-909f-79278237c434-logs\") pod \"watcher-kuttl-api-1\" (UID: \"06468907-f110-40c0-909f-79278237c434\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.672309 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fbcf272-2b8d-4b5f-98eb-88dce65767b2-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"7fbcf272-2b8d-4b5f-98eb-88dce65767b2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.708987 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.710272 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.716648 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.723927 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.774505 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e6a3a31-f716-4852-a12b-052d2aec23f2-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"5e6a3a31-f716-4852-a12b-052d2aec23f2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.774567 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06468907-f110-40c0-909f-79278237c434-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"06468907-f110-40c0-909f-79278237c434\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.774592 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06468907-f110-40c0-909f-79278237c434-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"06468907-f110-40c0-909f-79278237c434\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.774609 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06468907-f110-40c0-909f-79278237c434-logs\") pod \"watcher-kuttl-api-1\" (UID: \"06468907-f110-40c0-909f-79278237c434\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.774631 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fbcf272-2b8d-4b5f-98eb-88dce65767b2-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"7fbcf272-2b8d-4b5f-98eb-88dce65767b2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.774663 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbjnw\" (UniqueName: \"kubernetes.io/projected/06468907-f110-40c0-909f-79278237c434-kube-api-access-qbjnw\") pod \"watcher-kuttl-api-1\" (UID: \"06468907-f110-40c0-909f-79278237c434\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.774685 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbcf272-2b8d-4b5f-98eb-88dce65767b2-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"7fbcf272-2b8d-4b5f-98eb-88dce65767b2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.774733 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fksv\" (UniqueName: \"kubernetes.io/projected/7fbcf272-2b8d-4b5f-98eb-88dce65767b2-kube-api-access-7fksv\") pod \"watcher-kuttl-api-0\" (UID: \"7fbcf272-2b8d-4b5f-98eb-88dce65767b2\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.774751 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e6a3a31-f716-4852-a12b-052d2aec23f2-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"5e6a3a31-f716-4852-a12b-052d2aec23f2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.774774 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7fbcf272-2b8d-4b5f-98eb-88dce65767b2-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"7fbcf272-2b8d-4b5f-98eb-88dce65767b2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.774792 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqbbv\" (UniqueName: \"kubernetes.io/projected/5e6a3a31-f716-4852-a12b-052d2aec23f2-kube-api-access-xqbbv\") pod \"watcher-kuttl-applier-0\" (UID: \"5e6a3a31-f716-4852-a12b-052d2aec23f2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.774815 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7fbcf272-2b8d-4b5f-98eb-88dce65767b2-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"7fbcf272-2b8d-4b5f-98eb-88dce65767b2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.774830 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e6a3a31-f716-4852-a12b-052d2aec23f2-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"5e6a3a31-f716-4852-a12b-052d2aec23f2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.774849 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/06468907-f110-40c0-909f-79278237c434-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"06468907-f110-40c0-909f-79278237c434\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.774869 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fbcf272-2b8d-4b5f-98eb-88dce65767b2-logs\") pod \"watcher-kuttl-api-0\" (UID: \"7fbcf272-2b8d-4b5f-98eb-88dce65767b2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.774896 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/06468907-f110-40c0-909f-79278237c434-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"06468907-f110-40c0-909f-79278237c434\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.774911 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5e6a3a31-f716-4852-a12b-052d2aec23f2-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"5e6a3a31-f716-4852-a12b-052d2aec23f2\") " 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.775355 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06468907-f110-40c0-909f-79278237c434-logs\") pod \"watcher-kuttl-api-1\" (UID: \"06468907-f110-40c0-909f-79278237c434\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.776153 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fbcf272-2b8d-4b5f-98eb-88dce65767b2-logs\") pod \"watcher-kuttl-api-0\" (UID: \"7fbcf272-2b8d-4b5f-98eb-88dce65767b2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.779579 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06468907-f110-40c0-909f-79278237c434-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"06468907-f110-40c0-909f-79278237c434\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.781624 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/06468907-f110-40c0-909f-79278237c434-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"06468907-f110-40c0-909f-79278237c434\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.782232 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fbcf272-2b8d-4b5f-98eb-88dce65767b2-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"7fbcf272-2b8d-4b5f-98eb-88dce65767b2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.783140 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7fbcf272-2b8d-4b5f-98eb-88dce65767b2-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"7fbcf272-2b8d-4b5f-98eb-88dce65767b2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.783359 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbcf272-2b8d-4b5f-98eb-88dce65767b2-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"7fbcf272-2b8d-4b5f-98eb-88dce65767b2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.784592 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/06468907-f110-40c0-909f-79278237c434-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"06468907-f110-40c0-909f-79278237c434\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.792131 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06468907-f110-40c0-909f-79278237c434-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"06468907-f110-40c0-909f-79278237c434\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.792305 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7fbcf272-2b8d-4b5f-98eb-88dce65767b2-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"7fbcf272-2b8d-4b5f-98eb-88dce65767b2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.798865 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fksv\" (UniqueName: \"kubernetes.io/projected/7fbcf272-2b8d-4b5f-98eb-88dce65767b2-kube-api-access-7fksv\") pod \"watcher-kuttl-api-0\" (UID: \"7fbcf272-2b8d-4b5f-98eb-88dce65767b2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.800664 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbjnw\" (UniqueName: \"kubernetes.io/projected/06468907-f110-40c0-909f-79278237c434-kube-api-access-qbjnw\") pod \"watcher-kuttl-api-1\" (UID: \"06468907-f110-40c0-909f-79278237c434\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.876540 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e6a3a31-f716-4852-a12b-052d2aec23f2-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"5e6a3a31-f716-4852-a12b-052d2aec23f2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.876608 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/22230301-8cda-4fce-9e52-c9ab62d32c7b-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"22230301-8cda-4fce-9e52-c9ab62d32c7b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.876740 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqbbv\" (UniqueName: \"kubernetes.io/projected/5e6a3a31-f716-4852-a12b-052d2aec23f2-kube-api-access-xqbbv\") pod \"watcher-kuttl-applier-0\" (UID: \"5e6a3a31-f716-4852-a12b-052d2aec23f2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.876759 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/22230301-8cda-4fce-9e52-c9ab62d32c7b-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"22230301-8cda-4fce-9e52-c9ab62d32c7b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.876804 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e6a3a31-f716-4852-a12b-052d2aec23f2-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"5e6a3a31-f716-4852-a12b-052d2aec23f2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.876839 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5e6a3a31-f716-4852-a12b-052d2aec23f2-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"5e6a3a31-f716-4852-a12b-052d2aec23f2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.876916 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e6a3a31-f716-4852-a12b-052d2aec23f2-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"5e6a3a31-f716-4852-a12b-052d2aec23f2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.876966 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22230301-8cda-4fce-9e52-c9ab62d32c7b-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"22230301-8cda-4fce-9e52-c9ab62d32c7b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.877013 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22230301-8cda-4fce-9e52-c9ab62d32c7b-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"22230301-8cda-4fce-9e52-c9ab62d32c7b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.877035 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj2v7\" (UniqueName: \"kubernetes.io/projected/22230301-8cda-4fce-9e52-c9ab62d32c7b-kube-api-access-mj2v7\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"22230301-8cda-4fce-9e52-c9ab62d32c7b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.877063 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22230301-8cda-4fce-9e52-c9ab62d32c7b-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"22230301-8cda-4fce-9e52-c9ab62d32c7b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.877880 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e6a3a31-f716-4852-a12b-052d2aec23f2-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"5e6a3a31-f716-4852-a12b-052d2aec23f2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.882067 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5e6a3a31-f716-4852-a12b-052d2aec23f2-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"5e6a3a31-f716-4852-a12b-052d2aec23f2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.882184 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e6a3a31-f716-4852-a12b-052d2aec23f2-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"5e6a3a31-f716-4852-a12b-052d2aec23f2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.883135 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e6a3a31-f716-4852-a12b-052d2aec23f2-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"5e6a3a31-f716-4852-a12b-052d2aec23f2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:37:55 crc kubenswrapper[4796]: 
I1202 20:37:55.883627 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.897759 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqbbv\" (UniqueName: \"kubernetes.io/projected/5e6a3a31-f716-4852-a12b-052d2aec23f2-kube-api-access-xqbbv\") pod \"watcher-kuttl-applier-0\" (UID: \"5e6a3a31-f716-4852-a12b-052d2aec23f2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.910788 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.941209 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.978940 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/22230301-8cda-4fce-9e52-c9ab62d32c7b-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"22230301-8cda-4fce-9e52-c9ab62d32c7b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.978994 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/22230301-8cda-4fce-9e52-c9ab62d32c7b-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"22230301-8cda-4fce-9e52-c9ab62d32c7b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.979063 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22230301-8cda-4fce-9e52-c9ab62d32c7b-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"22230301-8cda-4fce-9e52-c9ab62d32c7b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.979104 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22230301-8cda-4fce-9e52-c9ab62d32c7b-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"22230301-8cda-4fce-9e52-c9ab62d32c7b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.979124 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj2v7\" (UniqueName: \"kubernetes.io/projected/22230301-8cda-4fce-9e52-c9ab62d32c7b-kube-api-access-mj2v7\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"22230301-8cda-4fce-9e52-c9ab62d32c7b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.979155 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22230301-8cda-4fce-9e52-c9ab62d32c7b-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"22230301-8cda-4fce-9e52-c9ab62d32c7b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.980667 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/22230301-8cda-4fce-9e52-c9ab62d32c7b-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"22230301-8cda-4fce-9e52-c9ab62d32c7b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.986401 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/22230301-8cda-4fce-9e52-c9ab62d32c7b-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"22230301-8cda-4fce-9e52-c9ab62d32c7b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.987836 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/22230301-8cda-4fce-9e52-c9ab62d32c7b-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"22230301-8cda-4fce-9e52-c9ab62d32c7b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.988049 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22230301-8cda-4fce-9e52-c9ab62d32c7b-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"22230301-8cda-4fce-9e52-c9ab62d32c7b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.988410 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22230301-8cda-4fce-9e52-c9ab62d32c7b-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"22230301-8cda-4fce-9e52-c9ab62d32c7b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:37:55 crc kubenswrapper[4796]: I1202 20:37:55.998555 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj2v7\" (UniqueName: \"kubernetes.io/projected/22230301-8cda-4fce-9e52-c9ab62d32c7b-kube-api-access-mj2v7\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"22230301-8cda-4fce-9e52-c9ab62d32c7b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:37:56 crc kubenswrapper[4796]: I1202 20:37:56.025211 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:37:56 crc kubenswrapper[4796]: I1202 20:37:56.509727 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:37:56 crc kubenswrapper[4796]: W1202 20:37:56.565401 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06468907_f110_40c0_909f_79278237c434.slice/crio-91ec5cb44a3c42ae36943aa9d57657f439362e18e894880e84e7e656235eaeb5 WatchSource:0}: Error finding container 91ec5cb44a3c42ae36943aa9d57657f439362e18e894880e84e7e656235eaeb5: Status 404 returned error can't find the container with id 91ec5cb44a3c42ae36943aa9d57657f439362e18e894880e84e7e656235eaeb5 Dec 02 20:37:56 crc kubenswrapper[4796]: I1202 20:37:56.580297 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Dec 02 20:37:56 crc kubenswrapper[4796]: I1202 20:37:56.589217 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:37:56 crc kubenswrapper[4796]: I1202 20:37:56.625861 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:37:57 crc kubenswrapper[4796]: I1202 20:37:57.387055 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"06468907-f110-40c0-909f-79278237c434","Type":"ContainerStarted","Data":"79f0a149baa4f1ff4bce762ba4a157a430eb1bc2485a7af4143304a155d402f7"} Dec 02 20:37:57 crc kubenswrapper[4796]: I1202 20:37:57.387115 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"06468907-f110-40c0-909f-79278237c434","Type":"ContainerStarted","Data":"a23035f1bcbe05937d62cc7b47d1be370f0f6e3fa01127cc890e29182fb1a1d8"} Dec 02 20:37:57 crc kubenswrapper[4796]: I1202 20:37:57.387125 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"06468907-f110-40c0-909f-79278237c434","Type":"ContainerStarted","Data":"91ec5cb44a3c42ae36943aa9d57657f439362e18e894880e84e7e656235eaeb5"} Dec 02 20:37:57 crc kubenswrapper[4796]: I1202 20:37:57.388606 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:37:57 crc kubenswrapper[4796]: I1202 20:37:57.390818 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"22230301-8cda-4fce-9e52-c9ab62d32c7b","Type":"ContainerStarted","Data":"0f945b13aee69ba7fc427a0ea1b0f5278b51f1c229b00b0141fd1125673e3d2a"} Dec 02 20:37:57 crc kubenswrapper[4796]: I1202 20:37:57.390872 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"22230301-8cda-4fce-9e52-c9ab62d32c7b","Type":"ContainerStarted","Data":"00117304c2504803c82aaf29ccd287675945869f86b3bc2add60902ba4733740"} Dec 02 20:37:57 crc kubenswrapper[4796]: I1202 20:37:57.391242 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="06468907-f110-40c0-909f-79278237c434" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.178:9322/\": dial tcp 10.217.0.178:9322: connect: connection refused" Dec 02 20:37:57 crc kubenswrapper[4796]: I1202 20:37:57.399680 4796 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"5e6a3a31-f716-4852-a12b-052d2aec23f2","Type":"ContainerStarted","Data":"f16967c870de66ead8789ad9d3bbd72f41a4ceeb29699389ed0470953516a5c3"} Dec 02 20:37:57 crc kubenswrapper[4796]: I1202 20:37:57.400036 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"5e6a3a31-f716-4852-a12b-052d2aec23f2","Type":"ContainerStarted","Data":"dfbd7bf235cc775fa3317a54fd8250c773a486ab7667202adcd3b85f0de5e65f"} Dec 02 20:37:57 crc kubenswrapper[4796]: I1202 20:37:57.405008 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"7fbcf272-2b8d-4b5f-98eb-88dce65767b2","Type":"ContainerStarted","Data":"c5a969fef48ac55ce20d0e368f58e72fab2d2c203b7e9c31e0aa02048967c7b1"} Dec 02 20:37:57 crc kubenswrapper[4796]: I1202 20:37:57.405235 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"7fbcf272-2b8d-4b5f-98eb-88dce65767b2","Type":"ContainerStarted","Data":"a9e5f1c13a346e0e625cc23d7f14ebb299b11b9c96b2432939d76e07836705b9"} Dec 02 20:37:57 crc kubenswrapper[4796]: I1202 20:37:57.405354 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:37:57 crc kubenswrapper[4796]: I1202 20:37:57.405466 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"7fbcf272-2b8d-4b5f-98eb-88dce65767b2","Type":"ContainerStarted","Data":"6285e1abcaf08a032d5a06c1d1551bb80a04d19d99dae37ef0a16944d7ec41b3"} Dec 02 20:37:57 crc kubenswrapper[4796]: I1202 20:37:57.435063 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-1" podStartSLOduration=2.435032847 podStartE2EDuration="2.435032847s" podCreationTimestamp="2025-12-02 20:37:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:37:57.408416418 +0000 UTC m=+1560.411791952" watchObservedRunningTime="2025-12-02 20:37:57.435032847 +0000 UTC m=+1560.438408381" Dec 02 20:37:57 crc kubenswrapper[4796]: I1202 20:37:57.466188 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.466168916 podStartE2EDuration="2.466168916s" podCreationTimestamp="2025-12-02 20:37:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:37:57.459946117 +0000 UTC m=+1560.463321671" watchObservedRunningTime="2025-12-02 20:37:57.466168916 +0000 UTC m=+1560.469544450" Dec 02 20:37:57 crc kubenswrapper[4796]: I1202 20:37:57.520313 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.5202903770000002 podStartE2EDuration="2.520290377s" podCreationTimestamp="2025-12-02 20:37:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:37:57.51289567 +0000 UTC m=+1560.516271204" watchObservedRunningTime="2025-12-02 20:37:57.520290377 +0000 UTC m=+1560.523665911" Dec 02 20:37:57 crc kubenswrapper[4796]: I1202 20:37:57.572724 4796 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.572700798 podStartE2EDuration="2.572700798s" podCreationTimestamp="2025-12-02 20:37:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:37:57.565872553 +0000 UTC m=+1560.569248087" watchObservedRunningTime="2025-12-02 20:37:57.572700798 +0000 UTC m=+1560.576076332" Dec 02 20:37:59 crc kubenswrapper[4796]: I1202 20:37:59.420328 4796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 20:38:00 crc kubenswrapper[4796]: I1202 20:38:00.100487 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:38:00 crc kubenswrapper[4796]: I1202 20:38:00.670170 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:38:00 crc kubenswrapper[4796]: I1202 20:38:00.884409 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:38:00 crc kubenswrapper[4796]: I1202 20:38:00.911532 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:38:00 crc kubenswrapper[4796]: I1202 20:38:00.941855 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:38:05 crc kubenswrapper[4796]: I1202 20:38:05.884581 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:38:05 crc kubenswrapper[4796]: I1202 20:38:05.890726 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:38:05 crc kubenswrapper[4796]: I1202 20:38:05.912377 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:38:05 crc kubenswrapper[4796]: I1202 20:38:05.919204 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:38:05 crc kubenswrapper[4796]: I1202 20:38:05.942155 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:38:05 crc kubenswrapper[4796]: I1202 20:38:05.969179 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:38:06 crc kubenswrapper[4796]: I1202 20:38:06.026959 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:38:06 crc kubenswrapper[4796]: I1202 20:38:06.060220 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:38:06 crc kubenswrapper[4796]: I1202 20:38:06.490799 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:38:06 crc kubenswrapper[4796]: I1202 20:38:06.498383 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:38:06 crc kubenswrapper[4796]: I1202 20:38:06.498733 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:38:06 crc kubenswrapper[4796]: I1202 20:38:06.532040 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:38:06 crc kubenswrapper[4796]: I1202 20:38:06.544130 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:38:07 crc kubenswrapper[4796]: I1202 20:38:07.270440 4796 scope.go:117] "RemoveContainer" containerID="a3b1e4c1e71e5db50d52130283a4f488c4d2f8d9bfa463ec7357e0975e3daac1" Dec 02 20:38:07 crc kubenswrapper[4796]: E1202 20:38:07.270946 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wzhpq_openshift-machine-config-operator(5558dc7c-93f9-4212-bf22-fdec743e47ee)\"" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" Dec 02 20:38:09 crc kubenswrapper[4796]: I1202 20:38:09.197774 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:38:09 crc kubenswrapper[4796]: I1202 20:38:09.198142 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="55690485-e94e-48e1-9a93-30d0167cb17d" containerName="ceilometer-central-agent" containerID="cri-o://97c936e0e01d2819f29f1dde3c9703dadc7a7ce07eb37fd0b87bddb7812974b7" gracePeriod=30 Dec 02 20:38:09 crc kubenswrapper[4796]: I1202 20:38:09.198234 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="55690485-e94e-48e1-9a93-30d0167cb17d" containerName="sg-core" containerID="cri-o://95bac6b18181e976764938af5c50f58af70725e173c159fa868298581bdda97c" gracePeriod=30 Dec 02 20:38:09 crc kubenswrapper[4796]: I1202 20:38:09.198266 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="55690485-e94e-48e1-9a93-30d0167cb17d" containerName="ceilometer-notification-agent" containerID="cri-o://e7040e5f4b4a83f8c4a20c324516ffff23076124b101be2bde61e952056c5dfd" gracePeriod=30 Dec 02 20:38:09 crc kubenswrapper[4796]: I1202 20:38:09.198266 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="55690485-e94e-48e1-9a93-30d0167cb17d" containerName="proxy-httpd" containerID="cri-o://27eba010589333586390b4e84f280071b43571511672f678bfa76d02e655f9ba" gracePeriod=30 Dec 02 20:38:09 crc kubenswrapper[4796]: I1202 20:38:09.220572 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="55690485-e94e-48e1-9a93-30d0167cb17d" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.173:3000/\": EOF" Dec 02 20:38:09 crc kubenswrapper[4796]: I1202 20:38:09.518928 4796 generic.go:334] "Generic (PLEG): container finished" podID="55690485-e94e-48e1-9a93-30d0167cb17d" containerID="27eba010589333586390b4e84f280071b43571511672f678bfa76d02e655f9ba" exitCode=0 Dec 02 20:38:09 crc kubenswrapper[4796]: I1202 20:38:09.518984 4796 generic.go:334] "Generic (PLEG): container finished" podID="55690485-e94e-48e1-9a93-30d0167cb17d" containerID="95bac6b18181e976764938af5c50f58af70725e173c159fa868298581bdda97c" exitCode=2 Dec 02 20:38:09 crc 
kubenswrapper[4796]: I1202 20:38:09.519018 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"55690485-e94e-48e1-9a93-30d0167cb17d","Type":"ContainerDied","Data":"27eba010589333586390b4e84f280071b43571511672f678bfa76d02e655f9ba"} Dec 02 20:38:09 crc kubenswrapper[4796]: I1202 20:38:09.519059 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"55690485-e94e-48e1-9a93-30d0167cb17d","Type":"ContainerDied","Data":"95bac6b18181e976764938af5c50f58af70725e173c159fa868298581bdda97c"} Dec 02 20:38:10 crc kubenswrapper[4796]: I1202 20:38:10.566326 4796 generic.go:334] "Generic (PLEG): container finished" podID="55690485-e94e-48e1-9a93-30d0167cb17d" containerID="97c936e0e01d2819f29f1dde3c9703dadc7a7ce07eb37fd0b87bddb7812974b7" exitCode=0 Dec 02 20:38:10 crc kubenswrapper[4796]: I1202 20:38:10.566558 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"55690485-e94e-48e1-9a93-30d0167cb17d","Type":"ContainerDied","Data":"97c936e0e01d2819f29f1dde3c9703dadc7a7ce07eb37fd0b87bddb7812974b7"} Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.329905 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.505435 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/55690485-e94e-48e1-9a93-30d0167cb17d-ceilometer-tls-certs\") pod \"55690485-e94e-48e1-9a93-30d0167cb17d\" (UID: \"55690485-e94e-48e1-9a93-30d0167cb17d\") " Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.505492 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55690485-e94e-48e1-9a93-30d0167cb17d-scripts\") pod \"55690485-e94e-48e1-9a93-30d0167cb17d\" (UID: \"55690485-e94e-48e1-9a93-30d0167cb17d\") " Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.505576 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55690485-e94e-48e1-9a93-30d0167cb17d-config-data\") pod \"55690485-e94e-48e1-9a93-30d0167cb17d\" (UID: \"55690485-e94e-48e1-9a93-30d0167cb17d\") " Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.505596 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55690485-e94e-48e1-9a93-30d0167cb17d-log-httpd\") pod \"55690485-e94e-48e1-9a93-30d0167cb17d\" (UID: \"55690485-e94e-48e1-9a93-30d0167cb17d\") " Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.505666 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drl9l\" (UniqueName: \"kubernetes.io/projected/55690485-e94e-48e1-9a93-30d0167cb17d-kube-api-access-drl9l\") pod \"55690485-e94e-48e1-9a93-30d0167cb17d\" (UID: \"55690485-e94e-48e1-9a93-30d0167cb17d\") " Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.505693 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55690485-e94e-48e1-9a93-30d0167cb17d-run-httpd\") pod \"55690485-e94e-48e1-9a93-30d0167cb17d\" (UID: \"55690485-e94e-48e1-9a93-30d0167cb17d\") " Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.505722 4796 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/55690485-e94e-48e1-9a93-30d0167cb17d-sg-core-conf-yaml\") pod \"55690485-e94e-48e1-9a93-30d0167cb17d\" (UID: \"55690485-e94e-48e1-9a93-30d0167cb17d\") " Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.505743 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55690485-e94e-48e1-9a93-30d0167cb17d-combined-ca-bundle\") pod \"55690485-e94e-48e1-9a93-30d0167cb17d\" (UID: \"55690485-e94e-48e1-9a93-30d0167cb17d\") " Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.506902 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55690485-e94e-48e1-9a93-30d0167cb17d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "55690485-e94e-48e1-9a93-30d0167cb17d" (UID: "55690485-e94e-48e1-9a93-30d0167cb17d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.507004 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55690485-e94e-48e1-9a93-30d0167cb17d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "55690485-e94e-48e1-9a93-30d0167cb17d" (UID: "55690485-e94e-48e1-9a93-30d0167cb17d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.513576 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55690485-e94e-48e1-9a93-30d0167cb17d-scripts" (OuterVolumeSpecName: "scripts") pod "55690485-e94e-48e1-9a93-30d0167cb17d" (UID: "55690485-e94e-48e1-9a93-30d0167cb17d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.513789 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55690485-e94e-48e1-9a93-30d0167cb17d-kube-api-access-drl9l" (OuterVolumeSpecName: "kube-api-access-drl9l") pod "55690485-e94e-48e1-9a93-30d0167cb17d" (UID: "55690485-e94e-48e1-9a93-30d0167cb17d"). InnerVolumeSpecName "kube-api-access-drl9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.544506 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55690485-e94e-48e1-9a93-30d0167cb17d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "55690485-e94e-48e1-9a93-30d0167cb17d" (UID: "55690485-e94e-48e1-9a93-30d0167cb17d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.554874 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55690485-e94e-48e1-9a93-30d0167cb17d-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "55690485-e94e-48e1-9a93-30d0167cb17d" (UID: "55690485-e94e-48e1-9a93-30d0167cb17d"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.578871 4796 generic.go:334] "Generic (PLEG): container finished" podID="55690485-e94e-48e1-9a93-30d0167cb17d" containerID="e7040e5f4b4a83f8c4a20c324516ffff23076124b101be2bde61e952056c5dfd" exitCode=0 Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.578917 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"55690485-e94e-48e1-9a93-30d0167cb17d","Type":"ContainerDied","Data":"e7040e5f4b4a83f8c4a20c324516ffff23076124b101be2bde61e952056c5dfd"} Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.578934 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.578955 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"55690485-e94e-48e1-9a93-30d0167cb17d","Type":"ContainerDied","Data":"9fd1493dfceea65cf863a1227b0b489bf41d2239a1abf21257b2518445e7b0c3"} Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.578975 4796 scope.go:117] "RemoveContainer" containerID="27eba010589333586390b4e84f280071b43571511672f678bfa76d02e655f9ba" Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.581048 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55690485-e94e-48e1-9a93-30d0167cb17d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55690485-e94e-48e1-9a93-30d0167cb17d" (UID: "55690485-e94e-48e1-9a93-30d0167cb17d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.597976 4796 scope.go:117] "RemoveContainer" containerID="95bac6b18181e976764938af5c50f58af70725e173c159fa868298581bdda97c" Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.607504 4796 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/55690485-e94e-48e1-9a93-30d0167cb17d-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.607537 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55690485-e94e-48e1-9a93-30d0167cb17d-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.607548 4796 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55690485-e94e-48e1-9a93-30d0167cb17d-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.607558 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drl9l\" (UniqueName: \"kubernetes.io/projected/55690485-e94e-48e1-9a93-30d0167cb17d-kube-api-access-drl9l\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.607568 4796 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55690485-e94e-48e1-9a93-30d0167cb17d-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.608143 4796 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/55690485-e94e-48e1-9a93-30d0167cb17d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:11 crc kubenswrapper[4796]: 
I1202 20:38:11.608163 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55690485-e94e-48e1-9a93-30d0167cb17d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.616071 4796 scope.go:117] "RemoveContainer" containerID="e7040e5f4b4a83f8c4a20c324516ffff23076124b101be2bde61e952056c5dfd" Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.619524 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55690485-e94e-48e1-9a93-30d0167cb17d-config-data" (OuterVolumeSpecName: "config-data") pod "55690485-e94e-48e1-9a93-30d0167cb17d" (UID: "55690485-e94e-48e1-9a93-30d0167cb17d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.633440 4796 scope.go:117] "RemoveContainer" containerID="97c936e0e01d2819f29f1dde3c9703dadc7a7ce07eb37fd0b87bddb7812974b7" Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.651859 4796 scope.go:117] "RemoveContainer" containerID="27eba010589333586390b4e84f280071b43571511672f678bfa76d02e655f9ba" Dec 02 20:38:11 crc kubenswrapper[4796]: E1202 20:38:11.652353 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27eba010589333586390b4e84f280071b43571511672f678bfa76d02e655f9ba\": container with ID starting with 27eba010589333586390b4e84f280071b43571511672f678bfa76d02e655f9ba not found: ID does not exist" containerID="27eba010589333586390b4e84f280071b43571511672f678bfa76d02e655f9ba" Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.652386 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27eba010589333586390b4e84f280071b43571511672f678bfa76d02e655f9ba"} err="failed to get container status \"27eba010589333586390b4e84f280071b43571511672f678bfa76d02e655f9ba\": rpc error: code = NotFound desc = could not find container \"27eba010589333586390b4e84f280071b43571511672f678bfa76d02e655f9ba\": container with ID starting with 27eba010589333586390b4e84f280071b43571511672f678bfa76d02e655f9ba not found: ID does not exist" Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.652406 4796 scope.go:117] "RemoveContainer" containerID="95bac6b18181e976764938af5c50f58af70725e173c159fa868298581bdda97c" Dec 02 20:38:11 crc kubenswrapper[4796]: E1202 20:38:11.652960 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95bac6b18181e976764938af5c50f58af70725e173c159fa868298581bdda97c\": container with ID starting with 95bac6b18181e976764938af5c50f58af70725e173c159fa868298581bdda97c not found: ID does not exist" containerID="95bac6b18181e976764938af5c50f58af70725e173c159fa868298581bdda97c" Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.652981 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95bac6b18181e976764938af5c50f58af70725e173c159fa868298581bdda97c"} err="failed to get container status \"95bac6b18181e976764938af5c50f58af70725e173c159fa868298581bdda97c\": rpc error: code = NotFound desc = could not find container \"95bac6b18181e976764938af5c50f58af70725e173c159fa868298581bdda97c\": container with ID starting with 95bac6b18181e976764938af5c50f58af70725e173c159fa868298581bdda97c not found: ID does not exist" Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.652998 
4796 scope.go:117] "RemoveContainer" containerID="e7040e5f4b4a83f8c4a20c324516ffff23076124b101be2bde61e952056c5dfd" Dec 02 20:38:11 crc kubenswrapper[4796]: E1202 20:38:11.653344 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7040e5f4b4a83f8c4a20c324516ffff23076124b101be2bde61e952056c5dfd\": container with ID starting with e7040e5f4b4a83f8c4a20c324516ffff23076124b101be2bde61e952056c5dfd not found: ID does not exist" containerID="e7040e5f4b4a83f8c4a20c324516ffff23076124b101be2bde61e952056c5dfd" Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.653364 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7040e5f4b4a83f8c4a20c324516ffff23076124b101be2bde61e952056c5dfd"} err="failed to get container status \"e7040e5f4b4a83f8c4a20c324516ffff23076124b101be2bde61e952056c5dfd\": rpc error: code = NotFound desc = could not find container \"e7040e5f4b4a83f8c4a20c324516ffff23076124b101be2bde61e952056c5dfd\": container with ID starting with e7040e5f4b4a83f8c4a20c324516ffff23076124b101be2bde61e952056c5dfd not found: ID does not exist" Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.653376 4796 scope.go:117] "RemoveContainer" containerID="97c936e0e01d2819f29f1dde3c9703dadc7a7ce07eb37fd0b87bddb7812974b7" Dec 02 20:38:11 crc kubenswrapper[4796]: E1202 20:38:11.653578 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97c936e0e01d2819f29f1dde3c9703dadc7a7ce07eb37fd0b87bddb7812974b7\": container with ID starting with 97c936e0e01d2819f29f1dde3c9703dadc7a7ce07eb37fd0b87bddb7812974b7 not found: ID does not exist" containerID="97c936e0e01d2819f29f1dde3c9703dadc7a7ce07eb37fd0b87bddb7812974b7" Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.653599 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97c936e0e01d2819f29f1dde3c9703dadc7a7ce07eb37fd0b87bddb7812974b7"} err="failed to get container status \"97c936e0e01d2819f29f1dde3c9703dadc7a7ce07eb37fd0b87bddb7812974b7\": rpc error: code = NotFound desc = could not find container \"97c936e0e01d2819f29f1dde3c9703dadc7a7ce07eb37fd0b87bddb7812974b7\": container with ID starting with 97c936e0e01d2819f29f1dde3c9703dadc7a7ce07eb37fd0b87bddb7812974b7 not found: ID does not exist" Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.710991 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55690485-e94e-48e1-9a93-30d0167cb17d-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.927067 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.937155 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.960821 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:38:11 crc kubenswrapper[4796]: E1202 20:38:11.961218 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55690485-e94e-48e1-9a93-30d0167cb17d" containerName="sg-core" Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.961238 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="55690485-e94e-48e1-9a93-30d0167cb17d" containerName="sg-core" Dec 02 20:38:11 crc 
kubenswrapper[4796]: E1202 20:38:11.961268 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55690485-e94e-48e1-9a93-30d0167cb17d" containerName="ceilometer-notification-agent" Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.961275 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="55690485-e94e-48e1-9a93-30d0167cb17d" containerName="ceilometer-notification-agent" Dec 02 20:38:11 crc kubenswrapper[4796]: E1202 20:38:11.961296 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55690485-e94e-48e1-9a93-30d0167cb17d" containerName="ceilometer-central-agent" Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.961302 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="55690485-e94e-48e1-9a93-30d0167cb17d" containerName="ceilometer-central-agent" Dec 02 20:38:11 crc kubenswrapper[4796]: E1202 20:38:11.961318 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55690485-e94e-48e1-9a93-30d0167cb17d" containerName="proxy-httpd" Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.961324 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="55690485-e94e-48e1-9a93-30d0167cb17d" containerName="proxy-httpd" Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.961475 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="55690485-e94e-48e1-9a93-30d0167cb17d" containerName="proxy-httpd" Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.961500 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="55690485-e94e-48e1-9a93-30d0167cb17d" containerName="ceilometer-notification-agent" Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.961509 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="55690485-e94e-48e1-9a93-30d0167cb17d" containerName="ceilometer-central-agent" Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.961516 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="55690485-e94e-48e1-9a93-30d0167cb17d" containerName="sg-core" Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.963043 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.965373 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.965793 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.965986 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Dec 02 20:38:11 crc kubenswrapper[4796]: I1202 20:38:11.986603 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:38:12 crc kubenswrapper[4796]: I1202 20:38:12.119676 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bae6b3a-baf0-4d07-9780-9efa866909a4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3bae6b3a-baf0-4d07-9780-9efa866909a4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:12 crc kubenswrapper[4796]: I1202 20:38:12.119841 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3bae6b3a-baf0-4d07-9780-9efa866909a4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3bae6b3a-baf0-4d07-9780-9efa866909a4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:12 crc kubenswrapper[4796]: I1202 20:38:12.119907 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3bae6b3a-baf0-4d07-9780-9efa866909a4-log-httpd\") pod \"ceilometer-0\" (UID: \"3bae6b3a-baf0-4d07-9780-9efa866909a4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:12 crc kubenswrapper[4796]: I1202 20:38:12.119958 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bae6b3a-baf0-4d07-9780-9efa866909a4-config-data\") pod \"ceilometer-0\" (UID: \"3bae6b3a-baf0-4d07-9780-9efa866909a4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:12 crc kubenswrapper[4796]: I1202 20:38:12.120037 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bae6b3a-baf0-4d07-9780-9efa866909a4-scripts\") pod \"ceilometer-0\" (UID: \"3bae6b3a-baf0-4d07-9780-9efa866909a4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:12 crc kubenswrapper[4796]: I1202 20:38:12.120067 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bae6b3a-baf0-4d07-9780-9efa866909a4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3bae6b3a-baf0-4d07-9780-9efa866909a4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:12 crc kubenswrapper[4796]: I1202 20:38:12.120243 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lqp4\" (UniqueName: \"kubernetes.io/projected/3bae6b3a-baf0-4d07-9780-9efa866909a4-kube-api-access-7lqp4\") pod \"ceilometer-0\" (UID: \"3bae6b3a-baf0-4d07-9780-9efa866909a4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:12 crc kubenswrapper[4796]: I1202 20:38:12.120436 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3bae6b3a-baf0-4d07-9780-9efa866909a4-run-httpd\") pod \"ceilometer-0\" (UID: \"3bae6b3a-baf0-4d07-9780-9efa866909a4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:12 crc kubenswrapper[4796]: I1202 20:38:12.221789 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bae6b3a-baf0-4d07-9780-9efa866909a4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3bae6b3a-baf0-4d07-9780-9efa866909a4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:12 crc kubenswrapper[4796]: I1202 20:38:12.221866 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3bae6b3a-baf0-4d07-9780-9efa866909a4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3bae6b3a-baf0-4d07-9780-9efa866909a4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:12 crc kubenswrapper[4796]: I1202 20:38:12.221901 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3bae6b3a-baf0-4d07-9780-9efa866909a4-log-httpd\") pod \"ceilometer-0\" (UID: \"3bae6b3a-baf0-4d07-9780-9efa866909a4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:12 crc kubenswrapper[4796]: I1202 20:38:12.221928 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bae6b3a-baf0-4d07-9780-9efa866909a4-config-data\") pod \"ceilometer-0\" (UID: \"3bae6b3a-baf0-4d07-9780-9efa866909a4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:12 crc kubenswrapper[4796]: I1202 20:38:12.221957 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bae6b3a-baf0-4d07-9780-9efa866909a4-scripts\") pod \"ceilometer-0\" (UID: \"3bae6b3a-baf0-4d07-9780-9efa866909a4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:12 crc kubenswrapper[4796]: I1202 20:38:12.221977 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bae6b3a-baf0-4d07-9780-9efa866909a4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3bae6b3a-baf0-4d07-9780-9efa866909a4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:12 crc kubenswrapper[4796]: I1202 20:38:12.222026 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lqp4\" (UniqueName: \"kubernetes.io/projected/3bae6b3a-baf0-4d07-9780-9efa866909a4-kube-api-access-7lqp4\") pod \"ceilometer-0\" (UID: \"3bae6b3a-baf0-4d07-9780-9efa866909a4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:12 crc kubenswrapper[4796]: I1202 20:38:12.222060 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3bae6b3a-baf0-4d07-9780-9efa866909a4-run-httpd\") pod \"ceilometer-0\" (UID: \"3bae6b3a-baf0-4d07-9780-9efa866909a4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:12 crc kubenswrapper[4796]: I1202 20:38:12.222704 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3bae6b3a-baf0-4d07-9780-9efa866909a4-run-httpd\") pod \"ceilometer-0\" (UID: \"3bae6b3a-baf0-4d07-9780-9efa866909a4\") 
" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:12 crc kubenswrapper[4796]: I1202 20:38:12.222725 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3bae6b3a-baf0-4d07-9780-9efa866909a4-log-httpd\") pod \"ceilometer-0\" (UID: \"3bae6b3a-baf0-4d07-9780-9efa866909a4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:12 crc kubenswrapper[4796]: I1202 20:38:12.226007 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bae6b3a-baf0-4d07-9780-9efa866909a4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3bae6b3a-baf0-4d07-9780-9efa866909a4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:12 crc kubenswrapper[4796]: I1202 20:38:12.226156 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bae6b3a-baf0-4d07-9780-9efa866909a4-config-data\") pod \"ceilometer-0\" (UID: \"3bae6b3a-baf0-4d07-9780-9efa866909a4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:12 crc kubenswrapper[4796]: I1202 20:38:12.226758 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bae6b3a-baf0-4d07-9780-9efa866909a4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3bae6b3a-baf0-4d07-9780-9efa866909a4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:12 crc kubenswrapper[4796]: I1202 20:38:12.228031 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bae6b3a-baf0-4d07-9780-9efa866909a4-scripts\") pod \"ceilometer-0\" (UID: \"3bae6b3a-baf0-4d07-9780-9efa866909a4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:12 crc kubenswrapper[4796]: I1202 20:38:12.232821 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3bae6b3a-baf0-4d07-9780-9efa866909a4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3bae6b3a-baf0-4d07-9780-9efa866909a4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:12 crc kubenswrapper[4796]: I1202 20:38:12.253815 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lqp4\" (UniqueName: \"kubernetes.io/projected/3bae6b3a-baf0-4d07-9780-9efa866909a4-kube-api-access-7lqp4\") pod \"ceilometer-0\" (UID: \"3bae6b3a-baf0-4d07-9780-9efa866909a4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:12 crc kubenswrapper[4796]: I1202 20:38:12.306326 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:12 crc kubenswrapper[4796]: W1202 20:38:12.826550 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3bae6b3a_baf0_4d07_9780_9efa866909a4.slice/crio-da77b7d620f87809ff543d11860ab9b867934caf6f1424a311e7974498288300 WatchSource:0}: Error finding container da77b7d620f87809ff543d11860ab9b867934caf6f1424a311e7974498288300: Status 404 returned error can't find the container with id da77b7d620f87809ff543d11860ab9b867934caf6f1424a311e7974498288300 Dec 02 20:38:12 crc kubenswrapper[4796]: I1202 20:38:12.830245 4796 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 20:38:12 crc kubenswrapper[4796]: I1202 20:38:12.835534 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:38:13 crc kubenswrapper[4796]: I1202 20:38:13.282112 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55690485-e94e-48e1-9a93-30d0167cb17d" path="/var/lib/kubelet/pods/55690485-e94e-48e1-9a93-30d0167cb17d/volumes" Dec 02 20:38:13 crc kubenswrapper[4796]: I1202 20:38:13.599386 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3bae6b3a-baf0-4d07-9780-9efa866909a4","Type":"ContainerStarted","Data":"da77b7d620f87809ff543d11860ab9b867934caf6f1424a311e7974498288300"} Dec 02 20:38:14 crc kubenswrapper[4796]: I1202 20:38:14.610552 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3bae6b3a-baf0-4d07-9780-9efa866909a4","Type":"ContainerStarted","Data":"58843ae7b3f76bebb2f667695492447b5aebdd001419199931fd6c039e93ff2b"} Dec 02 20:38:14 crc kubenswrapper[4796]: I1202 20:38:14.611443 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3bae6b3a-baf0-4d07-9780-9efa866909a4","Type":"ContainerStarted","Data":"c331ac54f3a77026fca854d1bb23efa8a831963010f96decd4000c8bb12ec366"} Dec 02 20:38:15 crc kubenswrapper[4796]: I1202 20:38:15.621721 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3bae6b3a-baf0-4d07-9780-9efa866909a4","Type":"ContainerStarted","Data":"1408d50a88999b18193233c52a2d0eb39f0220dc68d7888e13df1d162c23a3e0"} Dec 02 20:38:15 crc kubenswrapper[4796]: I1202 20:38:15.751332 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Dec 02 20:38:15 crc kubenswrapper[4796]: I1202 20:38:15.753211 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-2" Dec 02 20:38:15 crc kubenswrapper[4796]: I1202 20:38:15.770690 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Dec 02 20:38:15 crc kubenswrapper[4796]: I1202 20:38:15.897885 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d60fd7d7-a5a1-42d7-8cb1-f26304800218-custom-prometheus-ca\") pod \"watcher-kuttl-api-2\" (UID: \"d60fd7d7-a5a1-42d7-8cb1-f26304800218\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Dec 02 20:38:15 crc kubenswrapper[4796]: I1202 20:38:15.898132 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d60fd7d7-a5a1-42d7-8cb1-f26304800218-logs\") pod \"watcher-kuttl-api-2\" (UID: \"d60fd7d7-a5a1-42d7-8cb1-f26304800218\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Dec 02 20:38:15 crc kubenswrapper[4796]: I1202 20:38:15.898375 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d60fd7d7-a5a1-42d7-8cb1-f26304800218-config-data\") pod \"watcher-kuttl-api-2\" (UID: \"d60fd7d7-a5a1-42d7-8cb1-f26304800218\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Dec 02 20:38:15 crc kubenswrapper[4796]: I1202 20:38:15.898422 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/d60fd7d7-a5a1-42d7-8cb1-f26304800218-cert-memcached-mtls\") pod \"watcher-kuttl-api-2\" (UID: \"d60fd7d7-a5a1-42d7-8cb1-f26304800218\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Dec 02 20:38:15 crc kubenswrapper[4796]: I1202 20:38:15.898502 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbckk\" (UniqueName: \"kubernetes.io/projected/d60fd7d7-a5a1-42d7-8cb1-f26304800218-kube-api-access-pbckk\") pod \"watcher-kuttl-api-2\" (UID: \"d60fd7d7-a5a1-42d7-8cb1-f26304800218\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Dec 02 20:38:15 crc kubenswrapper[4796]: I1202 20:38:15.898546 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d60fd7d7-a5a1-42d7-8cb1-f26304800218-combined-ca-bundle\") pod \"watcher-kuttl-api-2\" (UID: \"d60fd7d7-a5a1-42d7-8cb1-f26304800218\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Dec 02 20:38:15 crc kubenswrapper[4796]: I1202 20:38:15.999575 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d60fd7d7-a5a1-42d7-8cb1-f26304800218-logs\") pod \"watcher-kuttl-api-2\" (UID: \"d60fd7d7-a5a1-42d7-8cb1-f26304800218\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Dec 02 20:38:15 crc kubenswrapper[4796]: I1202 20:38:15.999648 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d60fd7d7-a5a1-42d7-8cb1-f26304800218-config-data\") pod \"watcher-kuttl-api-2\" (UID: \"d60fd7d7-a5a1-42d7-8cb1-f26304800218\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Dec 02 20:38:15 crc kubenswrapper[4796]: I1202 20:38:15.999672 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/d60fd7d7-a5a1-42d7-8cb1-f26304800218-cert-memcached-mtls\") pod \"watcher-kuttl-api-2\" (UID: \"d60fd7d7-a5a1-42d7-8cb1-f26304800218\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Dec 02 20:38:15 crc kubenswrapper[4796]: I1202 20:38:15.999700 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbckk\" (UniqueName: \"kubernetes.io/projected/d60fd7d7-a5a1-42d7-8cb1-f26304800218-kube-api-access-pbckk\") pod \"watcher-kuttl-api-2\" (UID: \"d60fd7d7-a5a1-42d7-8cb1-f26304800218\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Dec 02 20:38:15 crc kubenswrapper[4796]: I1202 20:38:15.999723 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d60fd7d7-a5a1-42d7-8cb1-f26304800218-combined-ca-bundle\") pod \"watcher-kuttl-api-2\" (UID: \"d60fd7d7-a5a1-42d7-8cb1-f26304800218\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Dec 02 20:38:15 crc kubenswrapper[4796]: I1202 20:38:15.999774 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d60fd7d7-a5a1-42d7-8cb1-f26304800218-custom-prometheus-ca\") pod \"watcher-kuttl-api-2\" (UID: \"d60fd7d7-a5a1-42d7-8cb1-f26304800218\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Dec 02 20:38:16 crc kubenswrapper[4796]: I1202 20:38:16.001214 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d60fd7d7-a5a1-42d7-8cb1-f26304800218-logs\") pod \"watcher-kuttl-api-2\" (UID: \"d60fd7d7-a5a1-42d7-8cb1-f26304800218\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Dec 02 20:38:16 crc kubenswrapper[4796]: I1202 20:38:16.008925 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d60fd7d7-a5a1-42d7-8cb1-f26304800218-combined-ca-bundle\") pod \"watcher-kuttl-api-2\" (UID: \"d60fd7d7-a5a1-42d7-8cb1-f26304800218\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Dec 02 20:38:16 crc kubenswrapper[4796]: I1202 20:38:16.019458 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d60fd7d7-a5a1-42d7-8cb1-f26304800218-custom-prometheus-ca\") pod \"watcher-kuttl-api-2\" (UID: \"d60fd7d7-a5a1-42d7-8cb1-f26304800218\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Dec 02 20:38:16 crc kubenswrapper[4796]: I1202 20:38:16.019840 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/d60fd7d7-a5a1-42d7-8cb1-f26304800218-cert-memcached-mtls\") pod \"watcher-kuttl-api-2\" (UID: \"d60fd7d7-a5a1-42d7-8cb1-f26304800218\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Dec 02 20:38:16 crc kubenswrapper[4796]: I1202 20:38:16.023922 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbckk\" (UniqueName: \"kubernetes.io/projected/d60fd7d7-a5a1-42d7-8cb1-f26304800218-kube-api-access-pbckk\") pod \"watcher-kuttl-api-2\" (UID: \"d60fd7d7-a5a1-42d7-8cb1-f26304800218\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Dec 02 20:38:16 crc kubenswrapper[4796]: I1202 20:38:16.025319 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d60fd7d7-a5a1-42d7-8cb1-f26304800218-config-data\") 
pod \"watcher-kuttl-api-2\" (UID: \"d60fd7d7-a5a1-42d7-8cb1-f26304800218\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Dec 02 20:38:16 crc kubenswrapper[4796]: I1202 20:38:16.071113 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-2" Dec 02 20:38:16 crc kubenswrapper[4796]: I1202 20:38:16.543264 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Dec 02 20:38:16 crc kubenswrapper[4796]: I1202 20:38:16.630693 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"d60fd7d7-a5a1-42d7-8cb1-f26304800218","Type":"ContainerStarted","Data":"dabd41dfac8646370f3416bb5849082c3f2f312d30d0c155f213379f6bbb2c93"} Dec 02 20:38:17 crc kubenswrapper[4796]: I1202 20:38:17.645545 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"d60fd7d7-a5a1-42d7-8cb1-f26304800218","Type":"ContainerStarted","Data":"4e87962bb1ba0cfaa517f073c2d8fe6de1c7c26d553768a3ad48401148111c02"} Dec 02 20:38:17 crc kubenswrapper[4796]: I1202 20:38:17.646068 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"d60fd7d7-a5a1-42d7-8cb1-f26304800218","Type":"ContainerStarted","Data":"2ebb6b05c1b1e70a104c5eccb04bed4e1982b1985585af15cb4f67130dcd4e1d"} Dec 02 20:38:17 crc kubenswrapper[4796]: I1202 20:38:17.646113 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-2" Dec 02 20:38:17 crc kubenswrapper[4796]: I1202 20:38:17.650451 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3bae6b3a-baf0-4d07-9780-9efa866909a4","Type":"ContainerStarted","Data":"b382e1e2409674ff8ec6ccccabfe02523817b1097af31815b893a0829137dcc4"} Dec 02 20:38:17 crc kubenswrapper[4796]: I1202 20:38:17.650779 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:17 crc kubenswrapper[4796]: I1202 20:38:17.696130 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-2" podStartSLOduration=2.696109087 podStartE2EDuration="2.696109087s" podCreationTimestamp="2025-12-02 20:38:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:38:17.689454037 +0000 UTC m=+1580.692829601" watchObservedRunningTime="2025-12-02 20:38:17.696109087 +0000 UTC m=+1580.699484621" Dec 02 20:38:17 crc kubenswrapper[4796]: I1202 20:38:17.729376 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.633818174 podStartE2EDuration="6.729358627s" podCreationTimestamp="2025-12-02 20:38:11 +0000 UTC" firstStartedPulling="2025-12-02 20:38:12.829945088 +0000 UTC m=+1575.833320622" lastFinishedPulling="2025-12-02 20:38:16.925485541 +0000 UTC m=+1579.928861075" observedRunningTime="2025-12-02 20:38:17.72118835 +0000 UTC m=+1580.724563924" watchObservedRunningTime="2025-12-02 20:38:17.729358627 +0000 UTC m=+1580.732734161" Dec 02 20:38:18 crc kubenswrapper[4796]: I1202 20:38:18.333078 4796 scope.go:117] "RemoveContainer" containerID="8efb4c542d0b77bb2aadefbac6d81ecaca7108b0fe67dd73777a32d7ca9b6bde" Dec 02 20:38:18 crc kubenswrapper[4796]: I1202 20:38:18.408414 4796 scope.go:117] 
"RemoveContainer" containerID="953318fa009c34af9afc15fdfb8ea644384e81dc112a7334158e887d2d33642f" Dec 02 20:38:18 crc kubenswrapper[4796]: I1202 20:38:18.457576 4796 scope.go:117] "RemoveContainer" containerID="b935eb123366328babe6d3481a7516c1e8f6803b67c07915c61ca84a3783e547" Dec 02 20:38:19 crc kubenswrapper[4796]: I1202 20:38:19.270763 4796 scope.go:117] "RemoveContainer" containerID="a3b1e4c1e71e5db50d52130283a4f488c4d2f8d9bfa463ec7357e0975e3daac1" Dec 02 20:38:19 crc kubenswrapper[4796]: E1202 20:38:19.271898 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wzhpq_openshift-machine-config-operator(5558dc7c-93f9-4212-bf22-fdec743e47ee)\"" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" Dec 02 20:38:19 crc kubenswrapper[4796]: I1202 20:38:19.757208 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-2" Dec 02 20:38:21 crc kubenswrapper[4796]: I1202 20:38:21.071663 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-2" Dec 02 20:38:26 crc kubenswrapper[4796]: I1202 20:38:26.071786 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-2" Dec 02 20:38:26 crc kubenswrapper[4796]: I1202 20:38:26.077433 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-2" Dec 02 20:38:26 crc kubenswrapper[4796]: I1202 20:38:26.752149 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-2" Dec 02 20:38:27 crc kubenswrapper[4796]: I1202 20:38:27.350738 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Dec 02 20:38:27 crc kubenswrapper[4796]: I1202 20:38:27.373555 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Dec 02 20:38:27 crc kubenswrapper[4796]: I1202 20:38:27.373824 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="06468907-f110-40c0-909f-79278237c434" containerName="watcher-kuttl-api-log" containerID="cri-o://a23035f1bcbe05937d62cc7b47d1be370f0f6e3fa01127cc890e29182fb1a1d8" gracePeriod=30 Dec 02 20:38:27 crc kubenswrapper[4796]: I1202 20:38:27.374379 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="06468907-f110-40c0-909f-79278237c434" containerName="watcher-api" containerID="cri-o://79f0a149baa4f1ff4bce762ba4a157a430eb1bc2485a7af4143304a155d402f7" gracePeriod=30 Dec 02 20:38:27 crc kubenswrapper[4796]: I1202 20:38:27.753396 4796 generic.go:334] "Generic (PLEG): container finished" podID="06468907-f110-40c0-909f-79278237c434" containerID="a23035f1bcbe05937d62cc7b47d1be370f0f6e3fa01127cc890e29182fb1a1d8" exitCode=143 Dec 02 20:38:27 crc kubenswrapper[4796]: I1202 20:38:27.754811 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"06468907-f110-40c0-909f-79278237c434","Type":"ContainerDied","Data":"a23035f1bcbe05937d62cc7b47d1be370f0f6e3fa01127cc890e29182fb1a1d8"} Dec 02 20:38:28 crc kubenswrapper[4796]: 
I1202 20:38:28.695315 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:38:28 crc kubenswrapper[4796]: I1202 20:38:28.740771 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06468907-f110-40c0-909f-79278237c434-combined-ca-bundle\") pod \"06468907-f110-40c0-909f-79278237c434\" (UID: \"06468907-f110-40c0-909f-79278237c434\") " Dec 02 20:38:28 crc kubenswrapper[4796]: I1202 20:38:28.740876 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06468907-f110-40c0-909f-79278237c434-logs\") pod \"06468907-f110-40c0-909f-79278237c434\" (UID: \"06468907-f110-40c0-909f-79278237c434\") " Dec 02 20:38:28 crc kubenswrapper[4796]: I1202 20:38:28.740901 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06468907-f110-40c0-909f-79278237c434-config-data\") pod \"06468907-f110-40c0-909f-79278237c434\" (UID: \"06468907-f110-40c0-909f-79278237c434\") " Dec 02 20:38:28 crc kubenswrapper[4796]: I1202 20:38:28.740941 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/06468907-f110-40c0-909f-79278237c434-custom-prometheus-ca\") pod \"06468907-f110-40c0-909f-79278237c434\" (UID: \"06468907-f110-40c0-909f-79278237c434\") " Dec 02 20:38:28 crc kubenswrapper[4796]: I1202 20:38:28.741008 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbjnw\" (UniqueName: \"kubernetes.io/projected/06468907-f110-40c0-909f-79278237c434-kube-api-access-qbjnw\") pod \"06468907-f110-40c0-909f-79278237c434\" (UID: \"06468907-f110-40c0-909f-79278237c434\") " Dec 02 20:38:28 crc kubenswrapper[4796]: I1202 20:38:28.741092 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/06468907-f110-40c0-909f-79278237c434-cert-memcached-mtls\") pod \"06468907-f110-40c0-909f-79278237c434\" (UID: \"06468907-f110-40c0-909f-79278237c434\") " Dec 02 20:38:28 crc kubenswrapper[4796]: I1202 20:38:28.748150 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06468907-f110-40c0-909f-79278237c434-logs" (OuterVolumeSpecName: "logs") pod "06468907-f110-40c0-909f-79278237c434" (UID: "06468907-f110-40c0-909f-79278237c434"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:38:28 crc kubenswrapper[4796]: I1202 20:38:28.753556 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06468907-f110-40c0-909f-79278237c434-kube-api-access-qbjnw" (OuterVolumeSpecName: "kube-api-access-qbjnw") pod "06468907-f110-40c0-909f-79278237c434" (UID: "06468907-f110-40c0-909f-79278237c434"). InnerVolumeSpecName "kube-api-access-qbjnw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:38:28 crc kubenswrapper[4796]: I1202 20:38:28.767390 4796 generic.go:334] "Generic (PLEG): container finished" podID="06468907-f110-40c0-909f-79278237c434" containerID="79f0a149baa4f1ff4bce762ba4a157a430eb1bc2485a7af4143304a155d402f7" exitCode=0 Dec 02 20:38:28 crc kubenswrapper[4796]: I1202 20:38:28.767612 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-2" podUID="d60fd7d7-a5a1-42d7-8cb1-f26304800218" containerName="watcher-kuttl-api-log" containerID="cri-o://2ebb6b05c1b1e70a104c5eccb04bed4e1982b1985585af15cb4f67130dcd4e1d" gracePeriod=30 Dec 02 20:38:28 crc kubenswrapper[4796]: I1202 20:38:28.767916 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:38:28 crc kubenswrapper[4796]: I1202 20:38:28.768279 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"06468907-f110-40c0-909f-79278237c434","Type":"ContainerDied","Data":"79f0a149baa4f1ff4bce762ba4a157a430eb1bc2485a7af4143304a155d402f7"} Dec 02 20:38:28 crc kubenswrapper[4796]: I1202 20:38:28.768313 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"06468907-f110-40c0-909f-79278237c434","Type":"ContainerDied","Data":"91ec5cb44a3c42ae36943aa9d57657f439362e18e894880e84e7e656235eaeb5"} Dec 02 20:38:28 crc kubenswrapper[4796]: I1202 20:38:28.768330 4796 scope.go:117] "RemoveContainer" containerID="79f0a149baa4f1ff4bce762ba4a157a430eb1bc2485a7af4143304a155d402f7" Dec 02 20:38:28 crc kubenswrapper[4796]: I1202 20:38:28.768643 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-2" podUID="d60fd7d7-a5a1-42d7-8cb1-f26304800218" containerName="watcher-api" containerID="cri-o://4e87962bb1ba0cfaa517f073c2d8fe6de1c7c26d553768a3ad48401148111c02" gracePeriod=30 Dec 02 20:38:28 crc kubenswrapper[4796]: I1202 20:38:28.775477 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06468907-f110-40c0-909f-79278237c434-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "06468907-f110-40c0-909f-79278237c434" (UID: "06468907-f110-40c0-909f-79278237c434"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:38:28 crc kubenswrapper[4796]: I1202 20:38:28.810409 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06468907-f110-40c0-909f-79278237c434-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06468907-f110-40c0-909f-79278237c434" (UID: "06468907-f110-40c0-909f-79278237c434"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:38:28 crc kubenswrapper[4796]: I1202 20:38:28.822597 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06468907-f110-40c0-909f-79278237c434-config-data" (OuterVolumeSpecName: "config-data") pod "06468907-f110-40c0-909f-79278237c434" (UID: "06468907-f110-40c0-909f-79278237c434"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:38:28 crc kubenswrapper[4796]: I1202 20:38:28.843357 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbjnw\" (UniqueName: \"kubernetes.io/projected/06468907-f110-40c0-909f-79278237c434-kube-api-access-qbjnw\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:28 crc kubenswrapper[4796]: I1202 20:38:28.843400 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06468907-f110-40c0-909f-79278237c434-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:28 crc kubenswrapper[4796]: I1202 20:38:28.843414 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06468907-f110-40c0-909f-79278237c434-logs\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:28 crc kubenswrapper[4796]: I1202 20:38:28.843429 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06468907-f110-40c0-909f-79278237c434-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:28 crc kubenswrapper[4796]: I1202 20:38:28.843441 4796 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/06468907-f110-40c0-909f-79278237c434-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:28 crc kubenswrapper[4796]: I1202 20:38:28.844754 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06468907-f110-40c0-909f-79278237c434-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "06468907-f110-40c0-909f-79278237c434" (UID: "06468907-f110-40c0-909f-79278237c434"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:38:28 crc kubenswrapper[4796]: I1202 20:38:28.876217 4796 scope.go:117] "RemoveContainer" containerID="a23035f1bcbe05937d62cc7b47d1be370f0f6e3fa01127cc890e29182fb1a1d8" Dec 02 20:38:28 crc kubenswrapper[4796]: I1202 20:38:28.900702 4796 scope.go:117] "RemoveContainer" containerID="79f0a149baa4f1ff4bce762ba4a157a430eb1bc2485a7af4143304a155d402f7" Dec 02 20:38:28 crc kubenswrapper[4796]: E1202 20:38:28.901147 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79f0a149baa4f1ff4bce762ba4a157a430eb1bc2485a7af4143304a155d402f7\": container with ID starting with 79f0a149baa4f1ff4bce762ba4a157a430eb1bc2485a7af4143304a155d402f7 not found: ID does not exist" containerID="79f0a149baa4f1ff4bce762ba4a157a430eb1bc2485a7af4143304a155d402f7" Dec 02 20:38:28 crc kubenswrapper[4796]: I1202 20:38:28.901195 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79f0a149baa4f1ff4bce762ba4a157a430eb1bc2485a7af4143304a155d402f7"} err="failed to get container status \"79f0a149baa4f1ff4bce762ba4a157a430eb1bc2485a7af4143304a155d402f7\": rpc error: code = NotFound desc = could not find container \"79f0a149baa4f1ff4bce762ba4a157a430eb1bc2485a7af4143304a155d402f7\": container with ID starting with 79f0a149baa4f1ff4bce762ba4a157a430eb1bc2485a7af4143304a155d402f7 not found: ID does not exist" Dec 02 20:38:28 crc kubenswrapper[4796]: I1202 20:38:28.901228 4796 scope.go:117] "RemoveContainer" containerID="a23035f1bcbe05937d62cc7b47d1be370f0f6e3fa01127cc890e29182fb1a1d8" Dec 02 20:38:28 crc kubenswrapper[4796]: E1202 20:38:28.901703 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a23035f1bcbe05937d62cc7b47d1be370f0f6e3fa01127cc890e29182fb1a1d8\": container with ID starting with a23035f1bcbe05937d62cc7b47d1be370f0f6e3fa01127cc890e29182fb1a1d8 not found: ID does not exist" containerID="a23035f1bcbe05937d62cc7b47d1be370f0f6e3fa01127cc890e29182fb1a1d8" Dec 02 20:38:28 crc kubenswrapper[4796]: I1202 20:38:28.901731 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a23035f1bcbe05937d62cc7b47d1be370f0f6e3fa01127cc890e29182fb1a1d8"} err="failed to get container status \"a23035f1bcbe05937d62cc7b47d1be370f0f6e3fa01127cc890e29182fb1a1d8\": rpc error: code = NotFound desc = could not find container \"a23035f1bcbe05937d62cc7b47d1be370f0f6e3fa01127cc890e29182fb1a1d8\": container with ID starting with a23035f1bcbe05937d62cc7b47d1be370f0f6e3fa01127cc890e29182fb1a1d8 not found: ID does not exist" Dec 02 20:38:28 crc kubenswrapper[4796]: I1202 20:38:28.945117 4796 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/06468907-f110-40c0-909f-79278237c434-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:29 crc kubenswrapper[4796]: I1202 20:38:29.107876 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Dec 02 20:38:29 crc kubenswrapper[4796]: I1202 20:38:29.118834 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Dec 02 20:38:29 crc kubenswrapper[4796]: I1202 20:38:29.280932 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06468907-f110-40c0-909f-79278237c434" 
path="/var/lib/kubelet/pods/06468907-f110-40c0-909f-79278237c434/volumes" Dec 02 20:38:29 crc kubenswrapper[4796]: I1202 20:38:29.778109 4796 generic.go:334] "Generic (PLEG): container finished" podID="d60fd7d7-a5a1-42d7-8cb1-f26304800218" containerID="4e87962bb1ba0cfaa517f073c2d8fe6de1c7c26d553768a3ad48401148111c02" exitCode=0 Dec 02 20:38:29 crc kubenswrapper[4796]: I1202 20:38:29.778521 4796 generic.go:334] "Generic (PLEG): container finished" podID="d60fd7d7-a5a1-42d7-8cb1-f26304800218" containerID="2ebb6b05c1b1e70a104c5eccb04bed4e1982b1985585af15cb4f67130dcd4e1d" exitCode=143 Dec 02 20:38:29 crc kubenswrapper[4796]: I1202 20:38:29.778571 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"d60fd7d7-a5a1-42d7-8cb1-f26304800218","Type":"ContainerDied","Data":"4e87962bb1ba0cfaa517f073c2d8fe6de1c7c26d553768a3ad48401148111c02"} Dec 02 20:38:29 crc kubenswrapper[4796]: I1202 20:38:29.778603 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"d60fd7d7-a5a1-42d7-8cb1-f26304800218","Type":"ContainerDied","Data":"2ebb6b05c1b1e70a104c5eccb04bed4e1982b1985585af15cb4f67130dcd4e1d"} Dec 02 20:38:29 crc kubenswrapper[4796]: I1202 20:38:29.778619 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"d60fd7d7-a5a1-42d7-8cb1-f26304800218","Type":"ContainerDied","Data":"dabd41dfac8646370f3416bb5849082c3f2f312d30d0c155f213379f6bbb2c93"} Dec 02 20:38:29 crc kubenswrapper[4796]: I1202 20:38:29.778632 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dabd41dfac8646370f3416bb5849082c3f2f312d30d0c155f213379f6bbb2c93" Dec 02 20:38:29 crc kubenswrapper[4796]: I1202 20:38:29.851757 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-2" Dec 02 20:38:29 crc kubenswrapper[4796]: I1202 20:38:29.960926 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d60fd7d7-a5a1-42d7-8cb1-f26304800218-config-data\") pod \"d60fd7d7-a5a1-42d7-8cb1-f26304800218\" (UID: \"d60fd7d7-a5a1-42d7-8cb1-f26304800218\") " Dec 02 20:38:29 crc kubenswrapper[4796]: I1202 20:38:29.961004 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d60fd7d7-a5a1-42d7-8cb1-f26304800218-custom-prometheus-ca\") pod \"d60fd7d7-a5a1-42d7-8cb1-f26304800218\" (UID: \"d60fd7d7-a5a1-42d7-8cb1-f26304800218\") " Dec 02 20:38:29 crc kubenswrapper[4796]: I1202 20:38:29.961073 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbckk\" (UniqueName: \"kubernetes.io/projected/d60fd7d7-a5a1-42d7-8cb1-f26304800218-kube-api-access-pbckk\") pod \"d60fd7d7-a5a1-42d7-8cb1-f26304800218\" (UID: \"d60fd7d7-a5a1-42d7-8cb1-f26304800218\") " Dec 02 20:38:29 crc kubenswrapper[4796]: I1202 20:38:29.961153 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d60fd7d7-a5a1-42d7-8cb1-f26304800218-logs\") pod \"d60fd7d7-a5a1-42d7-8cb1-f26304800218\" (UID: \"d60fd7d7-a5a1-42d7-8cb1-f26304800218\") " Dec 02 20:38:29 crc kubenswrapper[4796]: I1202 20:38:29.961212 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/d60fd7d7-a5a1-42d7-8cb1-f26304800218-cert-memcached-mtls\") pod \"d60fd7d7-a5a1-42d7-8cb1-f26304800218\" (UID: \"d60fd7d7-a5a1-42d7-8cb1-f26304800218\") " Dec 02 20:38:29 crc kubenswrapper[4796]: I1202 20:38:29.961286 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d60fd7d7-a5a1-42d7-8cb1-f26304800218-combined-ca-bundle\") pod \"d60fd7d7-a5a1-42d7-8cb1-f26304800218\" (UID: \"d60fd7d7-a5a1-42d7-8cb1-f26304800218\") " Dec 02 20:38:29 crc kubenswrapper[4796]: I1202 20:38:29.962224 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d60fd7d7-a5a1-42d7-8cb1-f26304800218-logs" (OuterVolumeSpecName: "logs") pod "d60fd7d7-a5a1-42d7-8cb1-f26304800218" (UID: "d60fd7d7-a5a1-42d7-8cb1-f26304800218"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:38:29 crc kubenswrapper[4796]: I1202 20:38:29.978488 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d60fd7d7-a5a1-42d7-8cb1-f26304800218-kube-api-access-pbckk" (OuterVolumeSpecName: "kube-api-access-pbckk") pod "d60fd7d7-a5a1-42d7-8cb1-f26304800218" (UID: "d60fd7d7-a5a1-42d7-8cb1-f26304800218"). InnerVolumeSpecName "kube-api-access-pbckk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:38:30 crc kubenswrapper[4796]: I1202 20:38:30.004940 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d60fd7d7-a5a1-42d7-8cb1-f26304800218-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "d60fd7d7-a5a1-42d7-8cb1-f26304800218" (UID: "d60fd7d7-a5a1-42d7-8cb1-f26304800218"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:38:30 crc kubenswrapper[4796]: I1202 20:38:30.016066 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d60fd7d7-a5a1-42d7-8cb1-f26304800218-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d60fd7d7-a5a1-42d7-8cb1-f26304800218" (UID: "d60fd7d7-a5a1-42d7-8cb1-f26304800218"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:38:30 crc kubenswrapper[4796]: I1202 20:38:30.018481 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d60fd7d7-a5a1-42d7-8cb1-f26304800218-config-data" (OuterVolumeSpecName: "config-data") pod "d60fd7d7-a5a1-42d7-8cb1-f26304800218" (UID: "d60fd7d7-a5a1-42d7-8cb1-f26304800218"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:38:30 crc kubenswrapper[4796]: I1202 20:38:30.047859 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d60fd7d7-a5a1-42d7-8cb1-f26304800218-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "d60fd7d7-a5a1-42d7-8cb1-f26304800218" (UID: "d60fd7d7-a5a1-42d7-8cb1-f26304800218"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:38:30 crc kubenswrapper[4796]: I1202 20:38:30.063879 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d60fd7d7-a5a1-42d7-8cb1-f26304800218-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:30 crc kubenswrapper[4796]: I1202 20:38:30.064055 4796 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d60fd7d7-a5a1-42d7-8cb1-f26304800218-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:30 crc kubenswrapper[4796]: I1202 20:38:30.064161 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbckk\" (UniqueName: \"kubernetes.io/projected/d60fd7d7-a5a1-42d7-8cb1-f26304800218-kube-api-access-pbckk\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:30 crc kubenswrapper[4796]: I1202 20:38:30.064239 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d60fd7d7-a5a1-42d7-8cb1-f26304800218-logs\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:30 crc kubenswrapper[4796]: I1202 20:38:30.064340 4796 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/d60fd7d7-a5a1-42d7-8cb1-f26304800218-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:30 crc kubenswrapper[4796]: I1202 20:38:30.064421 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d60fd7d7-a5a1-42d7-8cb1-f26304800218-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:30 crc kubenswrapper[4796]: I1202 20:38:30.384592 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-srvxv"] Dec 02 20:38:30 crc kubenswrapper[4796]: E1202 20:38:30.384978 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d60fd7d7-a5a1-42d7-8cb1-f26304800218" containerName="watcher-api" Dec 02 20:38:30 crc kubenswrapper[4796]: I1202 20:38:30.384994 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d60fd7d7-a5a1-42d7-8cb1-f26304800218" containerName="watcher-api" 
Dec 02 20:38:30 crc kubenswrapper[4796]: E1202 20:38:30.385014 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d60fd7d7-a5a1-42d7-8cb1-f26304800218" containerName="watcher-kuttl-api-log" Dec 02 20:38:30 crc kubenswrapper[4796]: I1202 20:38:30.385023 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d60fd7d7-a5a1-42d7-8cb1-f26304800218" containerName="watcher-kuttl-api-log" Dec 02 20:38:30 crc kubenswrapper[4796]: E1202 20:38:30.385044 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06468907-f110-40c0-909f-79278237c434" containerName="watcher-api" Dec 02 20:38:30 crc kubenswrapper[4796]: I1202 20:38:30.385052 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="06468907-f110-40c0-909f-79278237c434" containerName="watcher-api" Dec 02 20:38:30 crc kubenswrapper[4796]: E1202 20:38:30.385075 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06468907-f110-40c0-909f-79278237c434" containerName="watcher-kuttl-api-log" Dec 02 20:38:30 crc kubenswrapper[4796]: I1202 20:38:30.385082 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="06468907-f110-40c0-909f-79278237c434" containerName="watcher-kuttl-api-log" Dec 02 20:38:30 crc kubenswrapper[4796]: I1202 20:38:30.385324 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="06468907-f110-40c0-909f-79278237c434" containerName="watcher-kuttl-api-log" Dec 02 20:38:30 crc kubenswrapper[4796]: I1202 20:38:30.385344 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="d60fd7d7-a5a1-42d7-8cb1-f26304800218" containerName="watcher-kuttl-api-log" Dec 02 20:38:30 crc kubenswrapper[4796]: I1202 20:38:30.385362 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="06468907-f110-40c0-909f-79278237c434" containerName="watcher-api" Dec 02 20:38:30 crc kubenswrapper[4796]: I1202 20:38:30.385388 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="d60fd7d7-a5a1-42d7-8cb1-f26304800218" containerName="watcher-api" Dec 02 20:38:30 crc kubenswrapper[4796]: I1202 20:38:30.386879 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-srvxv" Dec 02 20:38:30 crc kubenswrapper[4796]: I1202 20:38:30.397509 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-srvxv"] Dec 02 20:38:30 crc kubenswrapper[4796]: I1202 20:38:30.471293 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2bd55aa-8650-4335-93f1-82633f597f1a-catalog-content\") pod \"community-operators-srvxv\" (UID: \"c2bd55aa-8650-4335-93f1-82633f597f1a\") " pod="openshift-marketplace/community-operators-srvxv" Dec 02 20:38:30 crc kubenswrapper[4796]: I1202 20:38:30.471412 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2bd55aa-8650-4335-93f1-82633f597f1a-utilities\") pod \"community-operators-srvxv\" (UID: \"c2bd55aa-8650-4335-93f1-82633f597f1a\") " pod="openshift-marketplace/community-operators-srvxv" Dec 02 20:38:30 crc kubenswrapper[4796]: I1202 20:38:30.471478 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdm4q\" (UniqueName: \"kubernetes.io/projected/c2bd55aa-8650-4335-93f1-82633f597f1a-kube-api-access-cdm4q\") pod \"community-operators-srvxv\" (UID: \"c2bd55aa-8650-4335-93f1-82633f597f1a\") " pod="openshift-marketplace/community-operators-srvxv" Dec 02 20:38:30 crc kubenswrapper[4796]: I1202 20:38:30.573455 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdm4q\" (UniqueName: \"kubernetes.io/projected/c2bd55aa-8650-4335-93f1-82633f597f1a-kube-api-access-cdm4q\") pod \"community-operators-srvxv\" (UID: \"c2bd55aa-8650-4335-93f1-82633f597f1a\") " pod="openshift-marketplace/community-operators-srvxv" Dec 02 20:38:30 crc kubenswrapper[4796]: I1202 20:38:30.573535 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2bd55aa-8650-4335-93f1-82633f597f1a-catalog-content\") pod \"community-operators-srvxv\" (UID: \"c2bd55aa-8650-4335-93f1-82633f597f1a\") " pod="openshift-marketplace/community-operators-srvxv" Dec 02 20:38:30 crc kubenswrapper[4796]: I1202 20:38:30.573671 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2bd55aa-8650-4335-93f1-82633f597f1a-utilities\") pod \"community-operators-srvxv\" (UID: \"c2bd55aa-8650-4335-93f1-82633f597f1a\") " pod="openshift-marketplace/community-operators-srvxv" Dec 02 20:38:30 crc kubenswrapper[4796]: I1202 20:38:30.574189 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2bd55aa-8650-4335-93f1-82633f597f1a-catalog-content\") pod \"community-operators-srvxv\" (UID: \"c2bd55aa-8650-4335-93f1-82633f597f1a\") " pod="openshift-marketplace/community-operators-srvxv" Dec 02 20:38:30 crc kubenswrapper[4796]: I1202 20:38:30.574211 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2bd55aa-8650-4335-93f1-82633f597f1a-utilities\") pod \"community-operators-srvxv\" (UID: \"c2bd55aa-8650-4335-93f1-82633f597f1a\") " pod="openshift-marketplace/community-operators-srvxv" Dec 02 20:38:30 crc kubenswrapper[4796]: I1202 20:38:30.590699 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cdm4q\" (UniqueName: \"kubernetes.io/projected/c2bd55aa-8650-4335-93f1-82633f597f1a-kube-api-access-cdm4q\") pod \"community-operators-srvxv\" (UID: \"c2bd55aa-8650-4335-93f1-82633f597f1a\") " pod="openshift-marketplace/community-operators-srvxv" Dec 02 20:38:30 crc kubenswrapper[4796]: I1202 20:38:30.744681 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-srvxv" Dec 02 20:38:30 crc kubenswrapper[4796]: I1202 20:38:30.789996 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-2" Dec 02 20:38:30 crc kubenswrapper[4796]: I1202 20:38:30.883189 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Dec 02 20:38:30 crc kubenswrapper[4796]: I1202 20:38:30.893383 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Dec 02 20:38:31 crc kubenswrapper[4796]: I1202 20:38:31.275719 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d60fd7d7-a5a1-42d7-8cb1-f26304800218" path="/var/lib/kubelet/pods/d60fd7d7-a5a1-42d7-8cb1-f26304800218/volumes" Dec 02 20:38:31 crc kubenswrapper[4796]: I1202 20:38:31.318919 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-srvxv"] Dec 02 20:38:31 crc kubenswrapper[4796]: I1202 20:38:31.597215 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:38:31 crc kubenswrapper[4796]: I1202 20:38:31.597548 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="7fbcf272-2b8d-4b5f-98eb-88dce65767b2" containerName="watcher-kuttl-api-log" containerID="cri-o://a9e5f1c13a346e0e625cc23d7f14ebb299b11b9c96b2432939d76e07836705b9" gracePeriod=30 Dec 02 20:38:31 crc kubenswrapper[4796]: I1202 20:38:31.597681 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="7fbcf272-2b8d-4b5f-98eb-88dce65767b2" containerName="watcher-api" containerID="cri-o://c5a969fef48ac55ce20d0e368f58e72fab2d2c203b7e9c31e0aa02048967c7b1" gracePeriod=30 Dec 02 20:38:31 crc kubenswrapper[4796]: I1202 20:38:31.805781 4796 generic.go:334] "Generic (PLEG): container finished" podID="c2bd55aa-8650-4335-93f1-82633f597f1a" containerID="ae4abcec7059169122fda6d4d1c856aa07919e556fa90db0523370229a51d780" exitCode=0 Dec 02 20:38:31 crc kubenswrapper[4796]: I1202 20:38:31.806725 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-srvxv" event={"ID":"c2bd55aa-8650-4335-93f1-82633f597f1a","Type":"ContainerDied","Data":"ae4abcec7059169122fda6d4d1c856aa07919e556fa90db0523370229a51d780"} Dec 02 20:38:31 crc kubenswrapper[4796]: I1202 20:38:31.806760 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-srvxv" event={"ID":"c2bd55aa-8650-4335-93f1-82633f597f1a","Type":"ContainerStarted","Data":"e31b4aada1fc0c9f877e5d552390df5b290628fcb58428f2b67a7d1380de22db"} Dec 02 20:38:31 crc kubenswrapper[4796]: I1202 20:38:31.816845 4796 generic.go:334] "Generic (PLEG): container finished" podID="7fbcf272-2b8d-4b5f-98eb-88dce65767b2" containerID="a9e5f1c13a346e0e625cc23d7f14ebb299b11b9c96b2432939d76e07836705b9" exitCode=143 Dec 02 20:38:31 crc kubenswrapper[4796]: I1202 
20:38:31.817176 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"7fbcf272-2b8d-4b5f-98eb-88dce65767b2","Type":"ContainerDied","Data":"a9e5f1c13a346e0e625cc23d7f14ebb299b11b9c96b2432939d76e07836705b9"} Dec 02 20:38:32 crc kubenswrapper[4796]: I1202 20:38:32.439381 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:38:32 crc kubenswrapper[4796]: I1202 20:38:32.510701 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fbcf272-2b8d-4b5f-98eb-88dce65767b2-logs\") pod \"7fbcf272-2b8d-4b5f-98eb-88dce65767b2\" (UID: \"7fbcf272-2b8d-4b5f-98eb-88dce65767b2\") " Dec 02 20:38:32 crc kubenswrapper[4796]: I1202 20:38:32.510884 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbcf272-2b8d-4b5f-98eb-88dce65767b2-combined-ca-bundle\") pod \"7fbcf272-2b8d-4b5f-98eb-88dce65767b2\" (UID: \"7fbcf272-2b8d-4b5f-98eb-88dce65767b2\") " Dec 02 20:38:32 crc kubenswrapper[4796]: I1202 20:38:32.510916 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fbcf272-2b8d-4b5f-98eb-88dce65767b2-config-data\") pod \"7fbcf272-2b8d-4b5f-98eb-88dce65767b2\" (UID: \"7fbcf272-2b8d-4b5f-98eb-88dce65767b2\") " Dec 02 20:38:32 crc kubenswrapper[4796]: I1202 20:38:32.510941 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7fbcf272-2b8d-4b5f-98eb-88dce65767b2-custom-prometheus-ca\") pod \"7fbcf272-2b8d-4b5f-98eb-88dce65767b2\" (UID: \"7fbcf272-2b8d-4b5f-98eb-88dce65767b2\") " Dec 02 20:38:32 crc kubenswrapper[4796]: I1202 20:38:32.511043 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fksv\" (UniqueName: \"kubernetes.io/projected/7fbcf272-2b8d-4b5f-98eb-88dce65767b2-kube-api-access-7fksv\") pod \"7fbcf272-2b8d-4b5f-98eb-88dce65767b2\" (UID: \"7fbcf272-2b8d-4b5f-98eb-88dce65767b2\") " Dec 02 20:38:32 crc kubenswrapper[4796]: I1202 20:38:32.511074 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7fbcf272-2b8d-4b5f-98eb-88dce65767b2-cert-memcached-mtls\") pod \"7fbcf272-2b8d-4b5f-98eb-88dce65767b2\" (UID: \"7fbcf272-2b8d-4b5f-98eb-88dce65767b2\") " Dec 02 20:38:32 crc kubenswrapper[4796]: I1202 20:38:32.511529 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fbcf272-2b8d-4b5f-98eb-88dce65767b2-logs" (OuterVolumeSpecName: "logs") pod "7fbcf272-2b8d-4b5f-98eb-88dce65767b2" (UID: "7fbcf272-2b8d-4b5f-98eb-88dce65767b2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:38:32 crc kubenswrapper[4796]: I1202 20:38:32.534470 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fbcf272-2b8d-4b5f-98eb-88dce65767b2-kube-api-access-7fksv" (OuterVolumeSpecName: "kube-api-access-7fksv") pod "7fbcf272-2b8d-4b5f-98eb-88dce65767b2" (UID: "7fbcf272-2b8d-4b5f-98eb-88dce65767b2"). InnerVolumeSpecName "kube-api-access-7fksv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:38:32 crc kubenswrapper[4796]: I1202 20:38:32.566366 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fbcf272-2b8d-4b5f-98eb-88dce65767b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7fbcf272-2b8d-4b5f-98eb-88dce65767b2" (UID: "7fbcf272-2b8d-4b5f-98eb-88dce65767b2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:38:32 crc kubenswrapper[4796]: I1202 20:38:32.599376 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fbcf272-2b8d-4b5f-98eb-88dce65767b2-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "7fbcf272-2b8d-4b5f-98eb-88dce65767b2" (UID: "7fbcf272-2b8d-4b5f-98eb-88dce65767b2"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:38:32 crc kubenswrapper[4796]: I1202 20:38:32.602589 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fbcf272-2b8d-4b5f-98eb-88dce65767b2-config-data" (OuterVolumeSpecName: "config-data") pod "7fbcf272-2b8d-4b5f-98eb-88dce65767b2" (UID: "7fbcf272-2b8d-4b5f-98eb-88dce65767b2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:38:32 crc kubenswrapper[4796]: I1202 20:38:32.613122 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fksv\" (UniqueName: \"kubernetes.io/projected/7fbcf272-2b8d-4b5f-98eb-88dce65767b2-kube-api-access-7fksv\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:32 crc kubenswrapper[4796]: I1202 20:38:32.613159 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fbcf272-2b8d-4b5f-98eb-88dce65767b2-logs\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:32 crc kubenswrapper[4796]: I1202 20:38:32.613174 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbcf272-2b8d-4b5f-98eb-88dce65767b2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:32 crc kubenswrapper[4796]: I1202 20:38:32.613187 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fbcf272-2b8d-4b5f-98eb-88dce65767b2-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:32 crc kubenswrapper[4796]: I1202 20:38:32.613199 4796 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7fbcf272-2b8d-4b5f-98eb-88dce65767b2-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:32 crc kubenswrapper[4796]: I1202 20:38:32.636351 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fbcf272-2b8d-4b5f-98eb-88dce65767b2-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "7fbcf272-2b8d-4b5f-98eb-88dce65767b2" (UID: "7fbcf272-2b8d-4b5f-98eb-88dce65767b2"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:38:32 crc kubenswrapper[4796]: I1202 20:38:32.714655 4796 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7fbcf272-2b8d-4b5f-98eb-88dce65767b2-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:32 crc kubenswrapper[4796]: I1202 20:38:32.778153 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-mlksj"] Dec 02 20:38:32 crc kubenswrapper[4796]: I1202 20:38:32.787637 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-mlksj"] Dec 02 20:38:32 crc kubenswrapper[4796]: I1202 20:38:32.827471 4796 generic.go:334] "Generic (PLEG): container finished" podID="7fbcf272-2b8d-4b5f-98eb-88dce65767b2" containerID="c5a969fef48ac55ce20d0e368f58e72fab2d2c203b7e9c31e0aa02048967c7b1" exitCode=0 Dec 02 20:38:32 crc kubenswrapper[4796]: I1202 20:38:32.827565 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"7fbcf272-2b8d-4b5f-98eb-88dce65767b2","Type":"ContainerDied","Data":"c5a969fef48ac55ce20d0e368f58e72fab2d2c203b7e9c31e0aa02048967c7b1"} Dec 02 20:38:32 crc kubenswrapper[4796]: I1202 20:38:32.827596 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"7fbcf272-2b8d-4b5f-98eb-88dce65767b2","Type":"ContainerDied","Data":"6285e1abcaf08a032d5a06c1d1551bb80a04d19d99dae37ef0a16944d7ec41b3"} Dec 02 20:38:32 crc kubenswrapper[4796]: I1202 20:38:32.827616 4796 scope.go:117] "RemoveContainer" containerID="c5a969fef48ac55ce20d0e368f58e72fab2d2c203b7e9c31e0aa02048967c7b1" Dec 02 20:38:32 crc kubenswrapper[4796]: I1202 20:38:32.827953 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:38:32 crc kubenswrapper[4796]: I1202 20:38:32.830089 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-srvxv" event={"ID":"c2bd55aa-8650-4335-93f1-82633f597f1a","Type":"ContainerStarted","Data":"3293b99b9df40e3800f179bebf2fe033233948ae256e12def08de95c6c526f47"} Dec 02 20:38:32 crc kubenswrapper[4796]: I1202 20:38:32.845236 4796 scope.go:117] "RemoveContainer" containerID="a9e5f1c13a346e0e625cc23d7f14ebb299b11b9c96b2432939d76e07836705b9" Dec 02 20:38:32 crc kubenswrapper[4796]: I1202 20:38:32.868616 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:38:32 crc kubenswrapper[4796]: I1202 20:38:32.868886 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="22230301-8cda-4fce-9e52-c9ab62d32c7b" containerName="watcher-decision-engine" containerID="cri-o://0f945b13aee69ba7fc427a0ea1b0f5278b51f1c229b00b0141fd1125673e3d2a" gracePeriod=30 Dec 02 20:38:32 crc kubenswrapper[4796]: I1202 20:38:32.870956 4796 scope.go:117] "RemoveContainer" containerID="c5a969fef48ac55ce20d0e368f58e72fab2d2c203b7e9c31e0aa02048967c7b1" Dec 02 20:38:32 crc kubenswrapper[4796]: E1202 20:38:32.872921 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5a969fef48ac55ce20d0e368f58e72fab2d2c203b7e9c31e0aa02048967c7b1\": container with ID starting with c5a969fef48ac55ce20d0e368f58e72fab2d2c203b7e9c31e0aa02048967c7b1 not found: ID does not exist" containerID="c5a969fef48ac55ce20d0e368f58e72fab2d2c203b7e9c31e0aa02048967c7b1" Dec 02 20:38:32 crc kubenswrapper[4796]: I1202 20:38:32.872969 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5a969fef48ac55ce20d0e368f58e72fab2d2c203b7e9c31e0aa02048967c7b1"} err="failed to get container status \"c5a969fef48ac55ce20d0e368f58e72fab2d2c203b7e9c31e0aa02048967c7b1\": rpc error: code = NotFound desc = could not find container \"c5a969fef48ac55ce20d0e368f58e72fab2d2c203b7e9c31e0aa02048967c7b1\": container with ID starting with c5a969fef48ac55ce20d0e368f58e72fab2d2c203b7e9c31e0aa02048967c7b1 not found: ID does not exist" Dec 02 20:38:32 crc kubenswrapper[4796]: I1202 20:38:32.873008 4796 scope.go:117] "RemoveContainer" containerID="a9e5f1c13a346e0e625cc23d7f14ebb299b11b9c96b2432939d76e07836705b9" Dec 02 20:38:32 crc kubenswrapper[4796]: E1202 20:38:32.873713 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9e5f1c13a346e0e625cc23d7f14ebb299b11b9c96b2432939d76e07836705b9\": container with ID starting with a9e5f1c13a346e0e625cc23d7f14ebb299b11b9c96b2432939d76e07836705b9 not found: ID does not exist" containerID="a9e5f1c13a346e0e625cc23d7f14ebb299b11b9c96b2432939d76e07836705b9" Dec 02 20:38:32 crc kubenswrapper[4796]: I1202 20:38:32.873743 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9e5f1c13a346e0e625cc23d7f14ebb299b11b9c96b2432939d76e07836705b9"} err="failed to get container status \"a9e5f1c13a346e0e625cc23d7f14ebb299b11b9c96b2432939d76e07836705b9\": rpc error: code = NotFound desc = could not find container \"a9e5f1c13a346e0e625cc23d7f14ebb299b11b9c96b2432939d76e07836705b9\": container with ID starting with 
a9e5f1c13a346e0e625cc23d7f14ebb299b11b9c96b2432939d76e07836705b9 not found: ID does not exist" Dec 02 20:38:32 crc kubenswrapper[4796]: I1202 20:38:32.888738 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:38:32 crc kubenswrapper[4796]: I1202 20:38:32.896624 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:38:32 crc kubenswrapper[4796]: I1202 20:38:32.932557 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcherf764-account-delete-xj68l"] Dec 02 20:38:32 crc kubenswrapper[4796]: E1202 20:38:32.932868 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fbcf272-2b8d-4b5f-98eb-88dce65767b2" containerName="watcher-api" Dec 02 20:38:32 crc kubenswrapper[4796]: I1202 20:38:32.932884 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fbcf272-2b8d-4b5f-98eb-88dce65767b2" containerName="watcher-api" Dec 02 20:38:32 crc kubenswrapper[4796]: E1202 20:38:32.932910 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fbcf272-2b8d-4b5f-98eb-88dce65767b2" containerName="watcher-kuttl-api-log" Dec 02 20:38:32 crc kubenswrapper[4796]: I1202 20:38:32.932917 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fbcf272-2b8d-4b5f-98eb-88dce65767b2" containerName="watcher-kuttl-api-log" Dec 02 20:38:32 crc kubenswrapper[4796]: I1202 20:38:32.933071 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fbcf272-2b8d-4b5f-98eb-88dce65767b2" containerName="watcher-api" Dec 02 20:38:32 crc kubenswrapper[4796]: I1202 20:38:32.933092 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fbcf272-2b8d-4b5f-98eb-88dce65767b2" containerName="watcher-kuttl-api-log" Dec 02 20:38:32 crc kubenswrapper[4796]: I1202 20:38:32.933672 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcherf764-account-delete-xj68l" Dec 02 20:38:32 crc kubenswrapper[4796]: I1202 20:38:32.958812 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcherf764-account-delete-xj68l"] Dec 02 20:38:33 crc kubenswrapper[4796]: I1202 20:38:33.019730 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8372152d-c7a7-47d0-8d4c-631a39588398-operator-scripts\") pod \"watcherf764-account-delete-xj68l\" (UID: \"8372152d-c7a7-47d0-8d4c-631a39588398\") " pod="watcher-kuttl-default/watcherf764-account-delete-xj68l" Dec 02 20:38:33 crc kubenswrapper[4796]: I1202 20:38:33.020035 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz5s6\" (UniqueName: \"kubernetes.io/projected/8372152d-c7a7-47d0-8d4c-631a39588398-kube-api-access-xz5s6\") pod \"watcherf764-account-delete-xj68l\" (UID: \"8372152d-c7a7-47d0-8d4c-631a39588398\") " pod="watcher-kuttl-default/watcherf764-account-delete-xj68l" Dec 02 20:38:33 crc kubenswrapper[4796]: I1202 20:38:33.143272 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:38:33 crc kubenswrapper[4796]: I1202 20:38:33.143494 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="5e6a3a31-f716-4852-a12b-052d2aec23f2" containerName="watcher-applier" containerID="cri-o://f16967c870de66ead8789ad9d3bbd72f41a4ceeb29699389ed0470953516a5c3" gracePeriod=30 Dec 02 20:38:33 crc kubenswrapper[4796]: I1202 20:38:33.148411 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz5s6\" (UniqueName: \"kubernetes.io/projected/8372152d-c7a7-47d0-8d4c-631a39588398-kube-api-access-xz5s6\") pod \"watcherf764-account-delete-xj68l\" (UID: \"8372152d-c7a7-47d0-8d4c-631a39588398\") " pod="watcher-kuttl-default/watcherf764-account-delete-xj68l" Dec 02 20:38:33 crc kubenswrapper[4796]: I1202 20:38:33.148964 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8372152d-c7a7-47d0-8d4c-631a39588398-operator-scripts\") pod \"watcherf764-account-delete-xj68l\" (UID: \"8372152d-c7a7-47d0-8d4c-631a39588398\") " pod="watcher-kuttl-default/watcherf764-account-delete-xj68l" Dec 02 20:38:33 crc kubenswrapper[4796]: I1202 20:38:33.149951 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8372152d-c7a7-47d0-8d4c-631a39588398-operator-scripts\") pod \"watcherf764-account-delete-xj68l\" (UID: \"8372152d-c7a7-47d0-8d4c-631a39588398\") " pod="watcher-kuttl-default/watcherf764-account-delete-xj68l" Dec 02 20:38:33 crc kubenswrapper[4796]: I1202 20:38:33.189385 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz5s6\" (UniqueName: \"kubernetes.io/projected/8372152d-c7a7-47d0-8d4c-631a39588398-kube-api-access-xz5s6\") pod \"watcherf764-account-delete-xj68l\" (UID: \"8372152d-c7a7-47d0-8d4c-631a39588398\") " pod="watcher-kuttl-default/watcherf764-account-delete-xj68l" Dec 02 20:38:33 crc kubenswrapper[4796]: I1202 20:38:33.281975 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b4160ff-1706-4609-a2cc-39d3ce42eeeb" 
path="/var/lib/kubelet/pods/4b4160ff-1706-4609-a2cc-39d3ce42eeeb/volumes" Dec 02 20:38:33 crc kubenswrapper[4796]: I1202 20:38:33.282783 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fbcf272-2b8d-4b5f-98eb-88dce65767b2" path="/var/lib/kubelet/pods/7fbcf272-2b8d-4b5f-98eb-88dce65767b2/volumes" Dec 02 20:38:33 crc kubenswrapper[4796]: I1202 20:38:33.321992 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcherf764-account-delete-xj68l" Dec 02 20:38:33 crc kubenswrapper[4796]: I1202 20:38:33.839924 4796 generic.go:334] "Generic (PLEG): container finished" podID="c2bd55aa-8650-4335-93f1-82633f597f1a" containerID="3293b99b9df40e3800f179bebf2fe033233948ae256e12def08de95c6c526f47" exitCode=0 Dec 02 20:38:33 crc kubenswrapper[4796]: I1202 20:38:33.839994 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-srvxv" event={"ID":"c2bd55aa-8650-4335-93f1-82633f597f1a","Type":"ContainerDied","Data":"3293b99b9df40e3800f179bebf2fe033233948ae256e12def08de95c6c526f47"} Dec 02 20:38:33 crc kubenswrapper[4796]: I1202 20:38:33.887958 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcherf764-account-delete-xj68l"] Dec 02 20:38:34 crc kubenswrapper[4796]: I1202 20:38:34.264910 4796 scope.go:117] "RemoveContainer" containerID="a3b1e4c1e71e5db50d52130283a4f488c4d2f8d9bfa463ec7357e0975e3daac1" Dec 02 20:38:34 crc kubenswrapper[4796]: E1202 20:38:34.265691 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wzhpq_openshift-machine-config-operator(5558dc7c-93f9-4212-bf22-fdec743e47ee)\"" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" Dec 02 20:38:34 crc kubenswrapper[4796]: I1202 20:38:34.855270 4796 generic.go:334] "Generic (PLEG): container finished" podID="8372152d-c7a7-47d0-8d4c-631a39588398" containerID="5c623060c714e15333e88b22dd21193efbaf6c2dc3e5595aa77e5605d43804d8" exitCode=0 Dec 02 20:38:34 crc kubenswrapper[4796]: I1202 20:38:34.855346 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherf764-account-delete-xj68l" event={"ID":"8372152d-c7a7-47d0-8d4c-631a39588398","Type":"ContainerDied","Data":"5c623060c714e15333e88b22dd21193efbaf6c2dc3e5595aa77e5605d43804d8"} Dec 02 20:38:34 crc kubenswrapper[4796]: I1202 20:38:34.856109 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherf764-account-delete-xj68l" event={"ID":"8372152d-c7a7-47d0-8d4c-631a39588398","Type":"ContainerStarted","Data":"2980deb7093dae2f5c7bfc81bdce4ffba31d68537bc08b6771f9c8b48415ffbc"} Dec 02 20:38:34 crc kubenswrapper[4796]: I1202 20:38:34.859132 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-srvxv" event={"ID":"c2bd55aa-8650-4335-93f1-82633f597f1a","Type":"ContainerStarted","Data":"8cd31b344f193f835fe4fe57d35545b1a7be7057a4c7ba7ab3454433ea672554"} Dec 02 20:38:34 crc kubenswrapper[4796]: I1202 20:38:34.889646 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-srvxv" podStartSLOduration=2.364888691 podStartE2EDuration="4.889623468s" podCreationTimestamp="2025-12-02 20:38:30 +0000 UTC" firstStartedPulling="2025-12-02 20:38:31.807610604 
+0000 UTC m=+1594.810986148" lastFinishedPulling="2025-12-02 20:38:34.332345381 +0000 UTC m=+1597.335720925" observedRunningTime="2025-12-02 20:38:34.886537725 +0000 UTC m=+1597.889913249" watchObservedRunningTime="2025-12-02 20:38:34.889623468 +0000 UTC m=+1597.892999002" Dec 02 20:38:35 crc kubenswrapper[4796]: I1202 20:38:35.634667 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:38:35 crc kubenswrapper[4796]: I1202 20:38:35.635036 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="3bae6b3a-baf0-4d07-9780-9efa866909a4" containerName="ceilometer-central-agent" containerID="cri-o://c331ac54f3a77026fca854d1bb23efa8a831963010f96decd4000c8bb12ec366" gracePeriod=30 Dec 02 20:38:35 crc kubenswrapper[4796]: I1202 20:38:35.635123 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="3bae6b3a-baf0-4d07-9780-9efa866909a4" containerName="ceilometer-notification-agent" containerID="cri-o://58843ae7b3f76bebb2f667695492447b5aebdd001419199931fd6c039e93ff2b" gracePeriod=30 Dec 02 20:38:35 crc kubenswrapper[4796]: I1202 20:38:35.635123 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="3bae6b3a-baf0-4d07-9780-9efa866909a4" containerName="sg-core" containerID="cri-o://1408d50a88999b18193233c52a2d0eb39f0220dc68d7888e13df1d162c23a3e0" gracePeriod=30 Dec 02 20:38:35 crc kubenswrapper[4796]: I1202 20:38:35.635123 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="3bae6b3a-baf0-4d07-9780-9efa866909a4" containerName="proxy-httpd" containerID="cri-o://b382e1e2409674ff8ec6ccccabfe02523817b1097af31815b893a0829137dcc4" gracePeriod=30 Dec 02 20:38:35 crc kubenswrapper[4796]: I1202 20:38:35.662481 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="3bae6b3a-baf0-4d07-9780-9efa866909a4" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 02 20:38:35 crc kubenswrapper[4796]: I1202 20:38:35.737942 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:38:35 crc kubenswrapper[4796]: I1202 20:38:35.794165 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5e6a3a31-f716-4852-a12b-052d2aec23f2-cert-memcached-mtls\") pod \"5e6a3a31-f716-4852-a12b-052d2aec23f2\" (UID: \"5e6a3a31-f716-4852-a12b-052d2aec23f2\") " Dec 02 20:38:35 crc kubenswrapper[4796]: I1202 20:38:35.795487 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e6a3a31-f716-4852-a12b-052d2aec23f2-config-data\") pod \"5e6a3a31-f716-4852-a12b-052d2aec23f2\" (UID: \"5e6a3a31-f716-4852-a12b-052d2aec23f2\") " Dec 02 20:38:35 crc kubenswrapper[4796]: I1202 20:38:35.795556 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqbbv\" (UniqueName: \"kubernetes.io/projected/5e6a3a31-f716-4852-a12b-052d2aec23f2-kube-api-access-xqbbv\") pod \"5e6a3a31-f716-4852-a12b-052d2aec23f2\" (UID: \"5e6a3a31-f716-4852-a12b-052d2aec23f2\") " Dec 02 20:38:35 crc kubenswrapper[4796]: I1202 20:38:35.795642 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e6a3a31-f716-4852-a12b-052d2aec23f2-combined-ca-bundle\") pod \"5e6a3a31-f716-4852-a12b-052d2aec23f2\" (UID: \"5e6a3a31-f716-4852-a12b-052d2aec23f2\") " Dec 02 20:38:35 crc kubenswrapper[4796]: I1202 20:38:35.795714 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e6a3a31-f716-4852-a12b-052d2aec23f2-logs\") pod \"5e6a3a31-f716-4852-a12b-052d2aec23f2\" (UID: \"5e6a3a31-f716-4852-a12b-052d2aec23f2\") " Dec 02 20:38:35 crc kubenswrapper[4796]: I1202 20:38:35.796410 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e6a3a31-f716-4852-a12b-052d2aec23f2-logs" (OuterVolumeSpecName: "logs") pod "5e6a3a31-f716-4852-a12b-052d2aec23f2" (UID: "5e6a3a31-f716-4852-a12b-052d2aec23f2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:38:35 crc kubenswrapper[4796]: I1202 20:38:35.815579 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e6a3a31-f716-4852-a12b-052d2aec23f2-kube-api-access-xqbbv" (OuterVolumeSpecName: "kube-api-access-xqbbv") pod "5e6a3a31-f716-4852-a12b-052d2aec23f2" (UID: "5e6a3a31-f716-4852-a12b-052d2aec23f2"). InnerVolumeSpecName "kube-api-access-xqbbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:38:35 crc kubenswrapper[4796]: I1202 20:38:35.848559 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e6a3a31-f716-4852-a12b-052d2aec23f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e6a3a31-f716-4852-a12b-052d2aec23f2" (UID: "5e6a3a31-f716-4852-a12b-052d2aec23f2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:38:35 crc kubenswrapper[4796]: I1202 20:38:35.869395 4796 generic.go:334] "Generic (PLEG): container finished" podID="5e6a3a31-f716-4852-a12b-052d2aec23f2" containerID="f16967c870de66ead8789ad9d3bbd72f41a4ceeb29699389ed0470953516a5c3" exitCode=0 Dec 02 20:38:35 crc kubenswrapper[4796]: I1202 20:38:35.869525 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"5e6a3a31-f716-4852-a12b-052d2aec23f2","Type":"ContainerDied","Data":"f16967c870de66ead8789ad9d3bbd72f41a4ceeb29699389ed0470953516a5c3"} Dec 02 20:38:35 crc kubenswrapper[4796]: I1202 20:38:35.869598 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"5e6a3a31-f716-4852-a12b-052d2aec23f2","Type":"ContainerDied","Data":"dfbd7bf235cc775fa3317a54fd8250c773a486ab7667202adcd3b85f0de5e65f"} Dec 02 20:38:35 crc kubenswrapper[4796]: I1202 20:38:35.869657 4796 scope.go:117] "RemoveContainer" containerID="f16967c870de66ead8789ad9d3bbd72f41a4ceeb29699389ed0470953516a5c3" Dec 02 20:38:35 crc kubenswrapper[4796]: I1202 20:38:35.869810 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:38:35 crc kubenswrapper[4796]: I1202 20:38:35.884716 4796 generic.go:334] "Generic (PLEG): container finished" podID="3bae6b3a-baf0-4d07-9780-9efa866909a4" containerID="b382e1e2409674ff8ec6ccccabfe02523817b1097af31815b893a0829137dcc4" exitCode=0 Dec 02 20:38:35 crc kubenswrapper[4796]: I1202 20:38:35.884740 4796 generic.go:334] "Generic (PLEG): container finished" podID="3bae6b3a-baf0-4d07-9780-9efa866909a4" containerID="1408d50a88999b18193233c52a2d0eb39f0220dc68d7888e13df1d162c23a3e0" exitCode=2 Dec 02 20:38:35 crc kubenswrapper[4796]: I1202 20:38:35.885713 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3bae6b3a-baf0-4d07-9780-9efa866909a4","Type":"ContainerDied","Data":"b382e1e2409674ff8ec6ccccabfe02523817b1097af31815b893a0829137dcc4"} Dec 02 20:38:35 crc kubenswrapper[4796]: I1202 20:38:35.885738 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3bae6b3a-baf0-4d07-9780-9efa866909a4","Type":"ContainerDied","Data":"1408d50a88999b18193233c52a2d0eb39f0220dc68d7888e13df1d162c23a3e0"} Dec 02 20:38:35 crc kubenswrapper[4796]: I1202 20:38:35.897961 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e6a3a31-f716-4852-a12b-052d2aec23f2-logs\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:35 crc kubenswrapper[4796]: I1202 20:38:35.897991 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqbbv\" (UniqueName: \"kubernetes.io/projected/5e6a3a31-f716-4852-a12b-052d2aec23f2-kube-api-access-xqbbv\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:35 crc kubenswrapper[4796]: I1202 20:38:35.898003 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e6a3a31-f716-4852-a12b-052d2aec23f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:35 crc kubenswrapper[4796]: I1202 20:38:35.899142 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e6a3a31-f716-4852-a12b-052d2aec23f2-config-data" (OuterVolumeSpecName: "config-data") pod "5e6a3a31-f716-4852-a12b-052d2aec23f2" (UID: 
"5e6a3a31-f716-4852-a12b-052d2aec23f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:38:35 crc kubenswrapper[4796]: I1202 20:38:35.915451 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e6a3a31-f716-4852-a12b-052d2aec23f2-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "5e6a3a31-f716-4852-a12b-052d2aec23f2" (UID: "5e6a3a31-f716-4852-a12b-052d2aec23f2"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:38:35 crc kubenswrapper[4796]: I1202 20:38:35.915469 4796 scope.go:117] "RemoveContainer" containerID="f16967c870de66ead8789ad9d3bbd72f41a4ceeb29699389ed0470953516a5c3" Dec 02 20:38:35 crc kubenswrapper[4796]: E1202 20:38:35.929356 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f16967c870de66ead8789ad9d3bbd72f41a4ceeb29699389ed0470953516a5c3\": container with ID starting with f16967c870de66ead8789ad9d3bbd72f41a4ceeb29699389ed0470953516a5c3 not found: ID does not exist" containerID="f16967c870de66ead8789ad9d3bbd72f41a4ceeb29699389ed0470953516a5c3" Dec 02 20:38:35 crc kubenswrapper[4796]: I1202 20:38:35.929408 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f16967c870de66ead8789ad9d3bbd72f41a4ceeb29699389ed0470953516a5c3"} err="failed to get container status \"f16967c870de66ead8789ad9d3bbd72f41a4ceeb29699389ed0470953516a5c3\": rpc error: code = NotFound desc = could not find container \"f16967c870de66ead8789ad9d3bbd72f41a4ceeb29699389ed0470953516a5c3\": container with ID starting with f16967c870de66ead8789ad9d3bbd72f41a4ceeb29699389ed0470953516a5c3 not found: ID does not exist" Dec 02 20:38:36 crc kubenswrapper[4796]: I1202 20:38:36.009073 4796 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5e6a3a31-f716-4852-a12b-052d2aec23f2-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:36 crc kubenswrapper[4796]: I1202 20:38:36.009119 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e6a3a31-f716-4852-a12b-052d2aec23f2-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:36 crc kubenswrapper[4796]: I1202 20:38:36.180912 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcherf764-account-delete-xj68l" Dec 02 20:38:36 crc kubenswrapper[4796]: I1202 20:38:36.218306 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:38:36 crc kubenswrapper[4796]: I1202 20:38:36.228265 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:38:36 crc kubenswrapper[4796]: I1202 20:38:36.314584 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xz5s6\" (UniqueName: \"kubernetes.io/projected/8372152d-c7a7-47d0-8d4c-631a39588398-kube-api-access-xz5s6\") pod \"8372152d-c7a7-47d0-8d4c-631a39588398\" (UID: \"8372152d-c7a7-47d0-8d4c-631a39588398\") " Dec 02 20:38:36 crc kubenswrapper[4796]: I1202 20:38:36.315211 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8372152d-c7a7-47d0-8d4c-631a39588398-operator-scripts\") pod \"8372152d-c7a7-47d0-8d4c-631a39588398\" (UID: \"8372152d-c7a7-47d0-8d4c-631a39588398\") " Dec 02 20:38:36 crc kubenswrapper[4796]: I1202 20:38:36.317985 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8372152d-c7a7-47d0-8d4c-631a39588398-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8372152d-c7a7-47d0-8d4c-631a39588398" (UID: "8372152d-c7a7-47d0-8d4c-631a39588398"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:38:36 crc kubenswrapper[4796]: I1202 20:38:36.323529 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8372152d-c7a7-47d0-8d4c-631a39588398-kube-api-access-xz5s6" (OuterVolumeSpecName: "kube-api-access-xz5s6") pod "8372152d-c7a7-47d0-8d4c-631a39588398" (UID: "8372152d-c7a7-47d0-8d4c-631a39588398"). InnerVolumeSpecName "kube-api-access-xz5s6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:38:36 crc kubenswrapper[4796]: I1202 20:38:36.416800 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xz5s6\" (UniqueName: \"kubernetes.io/projected/8372152d-c7a7-47d0-8d4c-631a39588398-kube-api-access-xz5s6\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:36 crc kubenswrapper[4796]: I1202 20:38:36.416832 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8372152d-c7a7-47d0-8d4c-631a39588398-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:36 crc kubenswrapper[4796]: I1202 20:38:36.772637 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:38:36 crc kubenswrapper[4796]: I1202 20:38:36.828073 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22230301-8cda-4fce-9e52-c9ab62d32c7b-combined-ca-bundle\") pod \"22230301-8cda-4fce-9e52-c9ab62d32c7b\" (UID: \"22230301-8cda-4fce-9e52-c9ab62d32c7b\") " Dec 02 20:38:36 crc kubenswrapper[4796]: I1202 20:38:36.828245 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22230301-8cda-4fce-9e52-c9ab62d32c7b-logs\") pod \"22230301-8cda-4fce-9e52-c9ab62d32c7b\" (UID: \"22230301-8cda-4fce-9e52-c9ab62d32c7b\") " Dec 02 20:38:36 crc kubenswrapper[4796]: I1202 20:38:36.828383 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mj2v7\" (UniqueName: \"kubernetes.io/projected/22230301-8cda-4fce-9e52-c9ab62d32c7b-kube-api-access-mj2v7\") pod \"22230301-8cda-4fce-9e52-c9ab62d32c7b\" (UID: \"22230301-8cda-4fce-9e52-c9ab62d32c7b\") " Dec 02 20:38:36 crc kubenswrapper[4796]: I1202 20:38:36.828578 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22230301-8cda-4fce-9e52-c9ab62d32c7b-logs" (OuterVolumeSpecName: "logs") pod "22230301-8cda-4fce-9e52-c9ab62d32c7b" (UID: "22230301-8cda-4fce-9e52-c9ab62d32c7b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:38:36 crc kubenswrapper[4796]: I1202 20:38:36.828989 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/22230301-8cda-4fce-9e52-c9ab62d32c7b-cert-memcached-mtls\") pod \"22230301-8cda-4fce-9e52-c9ab62d32c7b\" (UID: \"22230301-8cda-4fce-9e52-c9ab62d32c7b\") " Dec 02 20:38:36 crc kubenswrapper[4796]: I1202 20:38:36.829078 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/22230301-8cda-4fce-9e52-c9ab62d32c7b-custom-prometheus-ca\") pod \"22230301-8cda-4fce-9e52-c9ab62d32c7b\" (UID: \"22230301-8cda-4fce-9e52-c9ab62d32c7b\") " Dec 02 20:38:36 crc kubenswrapper[4796]: I1202 20:38:36.829136 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22230301-8cda-4fce-9e52-c9ab62d32c7b-config-data\") pod \"22230301-8cda-4fce-9e52-c9ab62d32c7b\" (UID: \"22230301-8cda-4fce-9e52-c9ab62d32c7b\") " Dec 02 20:38:36 crc kubenswrapper[4796]: I1202 20:38:36.829631 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22230301-8cda-4fce-9e52-c9ab62d32c7b-logs\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:36 crc kubenswrapper[4796]: I1202 20:38:36.834500 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22230301-8cda-4fce-9e52-c9ab62d32c7b-kube-api-access-mj2v7" (OuterVolumeSpecName: "kube-api-access-mj2v7") pod "22230301-8cda-4fce-9e52-c9ab62d32c7b" (UID: "22230301-8cda-4fce-9e52-c9ab62d32c7b"). InnerVolumeSpecName "kube-api-access-mj2v7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:38:36 crc kubenswrapper[4796]: I1202 20:38:36.932119 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mj2v7\" (UniqueName: \"kubernetes.io/projected/22230301-8cda-4fce-9e52-c9ab62d32c7b-kube-api-access-mj2v7\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:36 crc kubenswrapper[4796]: I1202 20:38:36.970397 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22230301-8cda-4fce-9e52-c9ab62d32c7b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "22230301-8cda-4fce-9e52-c9ab62d32c7b" (UID: "22230301-8cda-4fce-9e52-c9ab62d32c7b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:38:36 crc kubenswrapper[4796]: I1202 20:38:36.974473 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22230301-8cda-4fce-9e52-c9ab62d32c7b-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "22230301-8cda-4fce-9e52-c9ab62d32c7b" (UID: "22230301-8cda-4fce-9e52-c9ab62d32c7b"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:38:36 crc kubenswrapper[4796]: I1202 20:38:36.980364 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22230301-8cda-4fce-9e52-c9ab62d32c7b-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "22230301-8cda-4fce-9e52-c9ab62d32c7b" (UID: "22230301-8cda-4fce-9e52-c9ab62d32c7b"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:38:36 crc kubenswrapper[4796]: I1202 20:38:36.991277 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22230301-8cda-4fce-9e52-c9ab62d32c7b-config-data" (OuterVolumeSpecName: "config-data") pod "22230301-8cda-4fce-9e52-c9ab62d32c7b" (UID: "22230301-8cda-4fce-9e52-c9ab62d32c7b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:38:36 crc kubenswrapper[4796]: I1202 20:38:36.995678 4796 generic.go:334] "Generic (PLEG): container finished" podID="3bae6b3a-baf0-4d07-9780-9efa866909a4" containerID="c331ac54f3a77026fca854d1bb23efa8a831963010f96decd4000c8bb12ec366" exitCode=0 Dec 02 20:38:36 crc kubenswrapper[4796]: I1202 20:38:36.995745 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3bae6b3a-baf0-4d07-9780-9efa866909a4","Type":"ContainerDied","Data":"c331ac54f3a77026fca854d1bb23efa8a831963010f96decd4000c8bb12ec366"} Dec 02 20:38:37 crc kubenswrapper[4796]: I1202 20:38:37.011672 4796 generic.go:334] "Generic (PLEG): container finished" podID="22230301-8cda-4fce-9e52-c9ab62d32c7b" containerID="0f945b13aee69ba7fc427a0ea1b0f5278b51f1c229b00b0141fd1125673e3d2a" exitCode=0 Dec 02 20:38:37 crc kubenswrapper[4796]: I1202 20:38:37.011718 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"22230301-8cda-4fce-9e52-c9ab62d32c7b","Type":"ContainerDied","Data":"0f945b13aee69ba7fc427a0ea1b0f5278b51f1c229b00b0141fd1125673e3d2a"} Dec 02 20:38:37 crc kubenswrapper[4796]: I1202 20:38:37.011736 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"22230301-8cda-4fce-9e52-c9ab62d32c7b","Type":"ContainerDied","Data":"00117304c2504803c82aaf29ccd287675945869f86b3bc2add60902ba4733740"} Dec 02 20:38:37 crc kubenswrapper[4796]: I1202 20:38:37.011753 4796 scope.go:117] "RemoveContainer" containerID="0f945b13aee69ba7fc427a0ea1b0f5278b51f1c229b00b0141fd1125673e3d2a" Dec 02 20:38:37 crc kubenswrapper[4796]: I1202 20:38:37.011861 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:38:37 crc kubenswrapper[4796]: I1202 20:38:37.035640 4796 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/22230301-8cda-4fce-9e52-c9ab62d32c7b-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:37 crc kubenswrapper[4796]: I1202 20:38:37.035665 4796 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/22230301-8cda-4fce-9e52-c9ab62d32c7b-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:37 crc kubenswrapper[4796]: I1202 20:38:37.035675 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22230301-8cda-4fce-9e52-c9ab62d32c7b-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:37 crc kubenswrapper[4796]: I1202 20:38:37.035684 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22230301-8cda-4fce-9e52-c9ab62d32c7b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:37 crc kubenswrapper[4796]: I1202 20:38:37.039189 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherf764-account-delete-xj68l" event={"ID":"8372152d-c7a7-47d0-8d4c-631a39588398","Type":"ContainerDied","Data":"2980deb7093dae2f5c7bfc81bdce4ffba31d68537bc08b6771f9c8b48415ffbc"} Dec 02 20:38:37 crc kubenswrapper[4796]: I1202 20:38:37.039242 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2980deb7093dae2f5c7bfc81bdce4ffba31d68537bc08b6771f9c8b48415ffbc" Dec 02 20:38:37 crc kubenswrapper[4796]: I1202 20:38:37.039323 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcherf764-account-delete-xj68l" Dec 02 20:38:37 crc kubenswrapper[4796]: I1202 20:38:37.061483 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:38:37 crc kubenswrapper[4796]: I1202 20:38:37.067380 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:38:37 crc kubenswrapper[4796]: I1202 20:38:37.070722 4796 scope.go:117] "RemoveContainer" containerID="0f945b13aee69ba7fc427a0ea1b0f5278b51f1c229b00b0141fd1125673e3d2a" Dec 02 20:38:37 crc kubenswrapper[4796]: E1202 20:38:37.071149 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f945b13aee69ba7fc427a0ea1b0f5278b51f1c229b00b0141fd1125673e3d2a\": container with ID starting with 0f945b13aee69ba7fc427a0ea1b0f5278b51f1c229b00b0141fd1125673e3d2a not found: ID does not exist" containerID="0f945b13aee69ba7fc427a0ea1b0f5278b51f1c229b00b0141fd1125673e3d2a" Dec 02 20:38:37 crc kubenswrapper[4796]: I1202 20:38:37.071185 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f945b13aee69ba7fc427a0ea1b0f5278b51f1c229b00b0141fd1125673e3d2a"} err="failed to get container status \"0f945b13aee69ba7fc427a0ea1b0f5278b51f1c229b00b0141fd1125673e3d2a\": rpc error: code = NotFound desc = could not find container \"0f945b13aee69ba7fc427a0ea1b0f5278b51f1c229b00b0141fd1125673e3d2a\": container with ID starting with 0f945b13aee69ba7fc427a0ea1b0f5278b51f1c229b00b0141fd1125673e3d2a not found: ID does not exist" Dec 02 20:38:37 crc kubenswrapper[4796]: I1202 20:38:37.280246 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22230301-8cda-4fce-9e52-c9ab62d32c7b" path="/var/lib/kubelet/pods/22230301-8cda-4fce-9e52-c9ab62d32c7b/volumes" Dec 02 20:38:37 crc kubenswrapper[4796]: I1202 20:38:37.280986 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e6a3a31-f716-4852-a12b-052d2aec23f2" path="/var/lib/kubelet/pods/5e6a3a31-f716-4852-a12b-052d2aec23f2/volumes" Dec 02 20:38:37 crc kubenswrapper[4796]: I1202 20:38:37.970012 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-dwsr6"] Dec 02 20:38:37 crc kubenswrapper[4796]: I1202 20:38:37.977435 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcherf764-account-delete-xj68l"] Dec 02 20:38:37 crc kubenswrapper[4796]: I1202 20:38:37.983794 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcherf764-account-delete-xj68l"] Dec 02 20:38:37 crc kubenswrapper[4796]: I1202 20:38:37.990341 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-dwsr6"] Dec 02 20:38:37 crc kubenswrapper[4796]: I1202 20:38:37.996904 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-f764-account-create-update-hjxvr"] Dec 02 20:38:38 crc kubenswrapper[4796]: I1202 20:38:38.002278 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-f764-account-create-update-hjxvr"] Dec 02 20:38:39 crc kubenswrapper[4796]: I1202 20:38:39.277586 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="515bff56-7b38-4e50-932e-4eee69d04268" path="/var/lib/kubelet/pods/515bff56-7b38-4e50-932e-4eee69d04268/volumes" Dec 02 20:38:39 crc kubenswrapper[4796]: 
I1202 20:38:39.278570 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8372152d-c7a7-47d0-8d4c-631a39588398" path="/var/lib/kubelet/pods/8372152d-c7a7-47d0-8d4c-631a39588398/volumes" Dec 02 20:38:39 crc kubenswrapper[4796]: I1202 20:38:39.279119 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8761c0a-f158-432e-992e-ef8dc6fd62de" path="/var/lib/kubelet/pods/e8761c0a-f158-432e-992e-ef8dc6fd62de/volumes" Dec 02 20:38:39 crc kubenswrapper[4796]: I1202 20:38:39.388178 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-jgr9z"] Dec 02 20:38:39 crc kubenswrapper[4796]: E1202 20:38:39.388532 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8372152d-c7a7-47d0-8d4c-631a39588398" containerName="mariadb-account-delete" Dec 02 20:38:39 crc kubenswrapper[4796]: I1202 20:38:39.388548 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="8372152d-c7a7-47d0-8d4c-631a39588398" containerName="mariadb-account-delete" Dec 02 20:38:39 crc kubenswrapper[4796]: E1202 20:38:39.388571 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22230301-8cda-4fce-9e52-c9ab62d32c7b" containerName="watcher-decision-engine" Dec 02 20:38:39 crc kubenswrapper[4796]: I1202 20:38:39.388577 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="22230301-8cda-4fce-9e52-c9ab62d32c7b" containerName="watcher-decision-engine" Dec 02 20:38:39 crc kubenswrapper[4796]: E1202 20:38:39.388611 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e6a3a31-f716-4852-a12b-052d2aec23f2" containerName="watcher-applier" Dec 02 20:38:39 crc kubenswrapper[4796]: I1202 20:38:39.388618 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e6a3a31-f716-4852-a12b-052d2aec23f2" containerName="watcher-applier" Dec 02 20:38:39 crc kubenswrapper[4796]: I1202 20:38:39.388760 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e6a3a31-f716-4852-a12b-052d2aec23f2" containerName="watcher-applier" Dec 02 20:38:39 crc kubenswrapper[4796]: I1202 20:38:39.388776 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="22230301-8cda-4fce-9e52-c9ab62d32c7b" containerName="watcher-decision-engine" Dec 02 20:38:39 crc kubenswrapper[4796]: I1202 20:38:39.388783 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="8372152d-c7a7-47d0-8d4c-631a39588398" containerName="mariadb-account-delete" Dec 02 20:38:39 crc kubenswrapper[4796]: I1202 20:38:39.389344 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-jgr9z" Dec 02 20:38:39 crc kubenswrapper[4796]: I1202 20:38:39.408728 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-cb37-account-create-update-xf2w9"] Dec 02 20:38:39 crc kubenswrapper[4796]: I1202 20:38:39.410621 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-cb37-account-create-update-xf2w9" Dec 02 20:38:39 crc kubenswrapper[4796]: I1202 20:38:39.417223 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Dec 02 20:38:39 crc kubenswrapper[4796]: I1202 20:38:39.455892 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-jgr9z"] Dec 02 20:38:39 crc kubenswrapper[4796]: I1202 20:38:39.472068 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-cb37-account-create-update-xf2w9"] Dec 02 20:38:39 crc kubenswrapper[4796]: I1202 20:38:39.477837 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwmmt\" (UniqueName: \"kubernetes.io/projected/c19eaf38-ecbd-469a-a0b3-78028623baac-kube-api-access-fwmmt\") pod \"watcher-cb37-account-create-update-xf2w9\" (UID: \"c19eaf38-ecbd-469a-a0b3-78028623baac\") " pod="watcher-kuttl-default/watcher-cb37-account-create-update-xf2w9" Dec 02 20:38:39 crc kubenswrapper[4796]: I1202 20:38:39.477994 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwjwg\" (UniqueName: \"kubernetes.io/projected/94d81d37-8e9d-47e1-b1de-415f2c7a5024-kube-api-access-vwjwg\") pod \"watcher-db-create-jgr9z\" (UID: \"94d81d37-8e9d-47e1-b1de-415f2c7a5024\") " pod="watcher-kuttl-default/watcher-db-create-jgr9z" Dec 02 20:38:39 crc kubenswrapper[4796]: I1202 20:38:39.478145 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94d81d37-8e9d-47e1-b1de-415f2c7a5024-operator-scripts\") pod \"watcher-db-create-jgr9z\" (UID: \"94d81d37-8e9d-47e1-b1de-415f2c7a5024\") " pod="watcher-kuttl-default/watcher-db-create-jgr9z" Dec 02 20:38:39 crc kubenswrapper[4796]: I1202 20:38:39.478362 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c19eaf38-ecbd-469a-a0b3-78028623baac-operator-scripts\") pod \"watcher-cb37-account-create-update-xf2w9\" (UID: \"c19eaf38-ecbd-469a-a0b3-78028623baac\") " pod="watcher-kuttl-default/watcher-cb37-account-create-update-xf2w9" Dec 02 20:38:39 crc kubenswrapper[4796]: I1202 20:38:39.580325 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwjwg\" (UniqueName: \"kubernetes.io/projected/94d81d37-8e9d-47e1-b1de-415f2c7a5024-kube-api-access-vwjwg\") pod \"watcher-db-create-jgr9z\" (UID: \"94d81d37-8e9d-47e1-b1de-415f2c7a5024\") " pod="watcher-kuttl-default/watcher-db-create-jgr9z" Dec 02 20:38:39 crc kubenswrapper[4796]: I1202 20:38:39.580388 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94d81d37-8e9d-47e1-b1de-415f2c7a5024-operator-scripts\") pod \"watcher-db-create-jgr9z\" (UID: \"94d81d37-8e9d-47e1-b1de-415f2c7a5024\") " pod="watcher-kuttl-default/watcher-db-create-jgr9z" Dec 02 20:38:39 crc kubenswrapper[4796]: I1202 20:38:39.580492 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c19eaf38-ecbd-469a-a0b3-78028623baac-operator-scripts\") pod \"watcher-cb37-account-create-update-xf2w9\" (UID: \"c19eaf38-ecbd-469a-a0b3-78028623baac\") " 
pod="watcher-kuttl-default/watcher-cb37-account-create-update-xf2w9" Dec 02 20:38:39 crc kubenswrapper[4796]: I1202 20:38:39.580566 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwmmt\" (UniqueName: \"kubernetes.io/projected/c19eaf38-ecbd-469a-a0b3-78028623baac-kube-api-access-fwmmt\") pod \"watcher-cb37-account-create-update-xf2w9\" (UID: \"c19eaf38-ecbd-469a-a0b3-78028623baac\") " pod="watcher-kuttl-default/watcher-cb37-account-create-update-xf2w9" Dec 02 20:38:39 crc kubenswrapper[4796]: I1202 20:38:39.581314 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94d81d37-8e9d-47e1-b1de-415f2c7a5024-operator-scripts\") pod \"watcher-db-create-jgr9z\" (UID: \"94d81d37-8e9d-47e1-b1de-415f2c7a5024\") " pod="watcher-kuttl-default/watcher-db-create-jgr9z" Dec 02 20:38:39 crc kubenswrapper[4796]: I1202 20:38:39.582022 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c19eaf38-ecbd-469a-a0b3-78028623baac-operator-scripts\") pod \"watcher-cb37-account-create-update-xf2w9\" (UID: \"c19eaf38-ecbd-469a-a0b3-78028623baac\") " pod="watcher-kuttl-default/watcher-cb37-account-create-update-xf2w9" Dec 02 20:38:39 crc kubenswrapper[4796]: I1202 20:38:39.609432 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwmmt\" (UniqueName: \"kubernetes.io/projected/c19eaf38-ecbd-469a-a0b3-78028623baac-kube-api-access-fwmmt\") pod \"watcher-cb37-account-create-update-xf2w9\" (UID: \"c19eaf38-ecbd-469a-a0b3-78028623baac\") " pod="watcher-kuttl-default/watcher-cb37-account-create-update-xf2w9" Dec 02 20:38:39 crc kubenswrapper[4796]: I1202 20:38:39.627168 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwjwg\" (UniqueName: \"kubernetes.io/projected/94d81d37-8e9d-47e1-b1de-415f2c7a5024-kube-api-access-vwjwg\") pod \"watcher-db-create-jgr9z\" (UID: \"94d81d37-8e9d-47e1-b1de-415f2c7a5024\") " pod="watcher-kuttl-default/watcher-db-create-jgr9z" Dec 02 20:38:39 crc kubenswrapper[4796]: I1202 20:38:39.703870 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-jgr9z" Dec 02 20:38:39 crc kubenswrapper[4796]: I1202 20:38:39.745851 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-cb37-account-create-update-xf2w9" Dec 02 20:38:40 crc kubenswrapper[4796]: I1202 20:38:40.217934 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-jgr9z"] Dec 02 20:38:40 crc kubenswrapper[4796]: I1202 20:38:40.358229 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-cb37-account-create-update-xf2w9"] Dec 02 20:38:40 crc kubenswrapper[4796]: I1202 20:38:40.508016 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:40 crc kubenswrapper[4796]: I1202 20:38:40.595419 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3bae6b3a-baf0-4d07-9780-9efa866909a4-log-httpd\") pod \"3bae6b3a-baf0-4d07-9780-9efa866909a4\" (UID: \"3bae6b3a-baf0-4d07-9780-9efa866909a4\") " Dec 02 20:38:40 crc kubenswrapper[4796]: I1202 20:38:40.595469 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bae6b3a-baf0-4d07-9780-9efa866909a4-combined-ca-bundle\") pod \"3bae6b3a-baf0-4d07-9780-9efa866909a4\" (UID: \"3bae6b3a-baf0-4d07-9780-9efa866909a4\") " Dec 02 20:38:40 crc kubenswrapper[4796]: I1202 20:38:40.595525 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bae6b3a-baf0-4d07-9780-9efa866909a4-config-data\") pod \"3bae6b3a-baf0-4d07-9780-9efa866909a4\" (UID: \"3bae6b3a-baf0-4d07-9780-9efa866909a4\") " Dec 02 20:38:40 crc kubenswrapper[4796]: I1202 20:38:40.595565 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bae6b3a-baf0-4d07-9780-9efa866909a4-ceilometer-tls-certs\") pod \"3bae6b3a-baf0-4d07-9780-9efa866909a4\" (UID: \"3bae6b3a-baf0-4d07-9780-9efa866909a4\") " Dec 02 20:38:40 crc kubenswrapper[4796]: I1202 20:38:40.595612 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lqp4\" (UniqueName: \"kubernetes.io/projected/3bae6b3a-baf0-4d07-9780-9efa866909a4-kube-api-access-7lqp4\") pod \"3bae6b3a-baf0-4d07-9780-9efa866909a4\" (UID: \"3bae6b3a-baf0-4d07-9780-9efa866909a4\") " Dec 02 20:38:40 crc kubenswrapper[4796]: I1202 20:38:40.595699 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3bae6b3a-baf0-4d07-9780-9efa866909a4-run-httpd\") pod \"3bae6b3a-baf0-4d07-9780-9efa866909a4\" (UID: \"3bae6b3a-baf0-4d07-9780-9efa866909a4\") " Dec 02 20:38:40 crc kubenswrapper[4796]: I1202 20:38:40.595759 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bae6b3a-baf0-4d07-9780-9efa866909a4-scripts\") pod \"3bae6b3a-baf0-4d07-9780-9efa866909a4\" (UID: \"3bae6b3a-baf0-4d07-9780-9efa866909a4\") " Dec 02 20:38:40 crc kubenswrapper[4796]: I1202 20:38:40.595774 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3bae6b3a-baf0-4d07-9780-9efa866909a4-sg-core-conf-yaml\") pod \"3bae6b3a-baf0-4d07-9780-9efa866909a4\" (UID: \"3bae6b3a-baf0-4d07-9780-9efa866909a4\") " Dec 02 20:38:40 crc kubenswrapper[4796]: I1202 20:38:40.600708 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bae6b3a-baf0-4d07-9780-9efa866909a4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3bae6b3a-baf0-4d07-9780-9efa866909a4" (UID: "3bae6b3a-baf0-4d07-9780-9efa866909a4"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:38:40 crc kubenswrapper[4796]: I1202 20:38:40.607575 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bae6b3a-baf0-4d07-9780-9efa866909a4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3bae6b3a-baf0-4d07-9780-9efa866909a4" (UID: "3bae6b3a-baf0-4d07-9780-9efa866909a4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:38:40 crc kubenswrapper[4796]: I1202 20:38:40.627702 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bae6b3a-baf0-4d07-9780-9efa866909a4-scripts" (OuterVolumeSpecName: "scripts") pod "3bae6b3a-baf0-4d07-9780-9efa866909a4" (UID: "3bae6b3a-baf0-4d07-9780-9efa866909a4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:38:40 crc kubenswrapper[4796]: I1202 20:38:40.636617 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bae6b3a-baf0-4d07-9780-9efa866909a4-kube-api-access-7lqp4" (OuterVolumeSpecName: "kube-api-access-7lqp4") pod "3bae6b3a-baf0-4d07-9780-9efa866909a4" (UID: "3bae6b3a-baf0-4d07-9780-9efa866909a4"). InnerVolumeSpecName "kube-api-access-7lqp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:38:40 crc kubenswrapper[4796]: I1202 20:38:40.697299 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lqp4\" (UniqueName: \"kubernetes.io/projected/3bae6b3a-baf0-4d07-9780-9efa866909a4-kube-api-access-7lqp4\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:40 crc kubenswrapper[4796]: I1202 20:38:40.697334 4796 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3bae6b3a-baf0-4d07-9780-9efa866909a4-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:40 crc kubenswrapper[4796]: I1202 20:38:40.697343 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bae6b3a-baf0-4d07-9780-9efa866909a4-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:40 crc kubenswrapper[4796]: I1202 20:38:40.697353 4796 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3bae6b3a-baf0-4d07-9780-9efa866909a4-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:40 crc kubenswrapper[4796]: I1202 20:38:40.722481 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bae6b3a-baf0-4d07-9780-9efa866909a4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3bae6b3a-baf0-4d07-9780-9efa866909a4" (UID: "3bae6b3a-baf0-4d07-9780-9efa866909a4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:38:40 crc kubenswrapper[4796]: I1202 20:38:40.742437 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bae6b3a-baf0-4d07-9780-9efa866909a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3bae6b3a-baf0-4d07-9780-9efa866909a4" (UID: "3bae6b3a-baf0-4d07-9780-9efa866909a4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:38:40 crc kubenswrapper[4796]: I1202 20:38:40.748489 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-srvxv" Dec 02 20:38:40 crc kubenswrapper[4796]: I1202 20:38:40.749927 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-srvxv" Dec 02 20:38:40 crc kubenswrapper[4796]: I1202 20:38:40.779503 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bae6b3a-baf0-4d07-9780-9efa866909a4-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "3bae6b3a-baf0-4d07-9780-9efa866909a4" (UID: "3bae6b3a-baf0-4d07-9780-9efa866909a4"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:38:40 crc kubenswrapper[4796]: I1202 20:38:40.805932 4796 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3bae6b3a-baf0-4d07-9780-9efa866909a4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:40 crc kubenswrapper[4796]: I1202 20:38:40.805987 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bae6b3a-baf0-4d07-9780-9efa866909a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:40 crc kubenswrapper[4796]: I1202 20:38:40.805999 4796 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bae6b3a-baf0-4d07-9780-9efa866909a4-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:40 crc kubenswrapper[4796]: I1202 20:38:40.834355 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bae6b3a-baf0-4d07-9780-9efa866909a4-config-data" (OuterVolumeSpecName: "config-data") pod "3bae6b3a-baf0-4d07-9780-9efa866909a4" (UID: "3bae6b3a-baf0-4d07-9780-9efa866909a4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:38:40 crc kubenswrapper[4796]: I1202 20:38:40.849538 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-srvxv" Dec 02 20:38:40 crc kubenswrapper[4796]: I1202 20:38:40.907812 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bae6b3a-baf0-4d07-9780-9efa866909a4-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.092668 4796 generic.go:334] "Generic (PLEG): container finished" podID="94d81d37-8e9d-47e1-b1de-415f2c7a5024" containerID="c992c846d3d65fb3bfdbb7512755392dfbb8bdc8f864186bf3acd298cf1aa3a3" exitCode=0 Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.092729 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-jgr9z" event={"ID":"94d81d37-8e9d-47e1-b1de-415f2c7a5024","Type":"ContainerDied","Data":"c992c846d3d65fb3bfdbb7512755392dfbb8bdc8f864186bf3acd298cf1aa3a3"} Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.092756 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-jgr9z" event={"ID":"94d81d37-8e9d-47e1-b1de-415f2c7a5024","Type":"ContainerStarted","Data":"d9d0a8886247821f0a73b3893e35cf9b82786578c21957155fbc80951683d874"} Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.096223 4796 generic.go:334] "Generic (PLEG): container finished" podID="c19eaf38-ecbd-469a-a0b3-78028623baac" containerID="78989ff6b440796ea937bd5933467798ff57dc131cfb36aae0fe286364e09a60" exitCode=0 Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.096415 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-cb37-account-create-update-xf2w9" event={"ID":"c19eaf38-ecbd-469a-a0b3-78028623baac","Type":"ContainerDied","Data":"78989ff6b440796ea937bd5933467798ff57dc131cfb36aae0fe286364e09a60"} Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.096471 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-cb37-account-create-update-xf2w9" event={"ID":"c19eaf38-ecbd-469a-a0b3-78028623baac","Type":"ContainerStarted","Data":"2a89701517a000289a6162ad2735cd147ca35e71f6f329dabadd0dcf08f3599b"} Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.099343 4796 generic.go:334] "Generic (PLEG): container finished" podID="3bae6b3a-baf0-4d07-9780-9efa866909a4" containerID="58843ae7b3f76bebb2f667695492447b5aebdd001419199931fd6c039e93ff2b" exitCode=0 Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.099581 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3bae6b3a-baf0-4d07-9780-9efa866909a4","Type":"ContainerDied","Data":"58843ae7b3f76bebb2f667695492447b5aebdd001419199931fd6c039e93ff2b"} Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.099642 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3bae6b3a-baf0-4d07-9780-9efa866909a4","Type":"ContainerDied","Data":"da77b7d620f87809ff543d11860ab9b867934caf6f1424a311e7974498288300"} Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.099607 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.099663 4796 scope.go:117] "RemoveContainer" containerID="b382e1e2409674ff8ec6ccccabfe02523817b1097af31815b893a0829137dcc4" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.146843 4796 scope.go:117] "RemoveContainer" containerID="1408d50a88999b18193233c52a2d0eb39f0220dc68d7888e13df1d162c23a3e0" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.169611 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-srvxv" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.175847 4796 scope.go:117] "RemoveContainer" containerID="58843ae7b3f76bebb2f667695492447b5aebdd001419199931fd6c039e93ff2b" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.188314 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.198442 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.207659 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:38:41 crc kubenswrapper[4796]: E1202 20:38:41.208010 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bae6b3a-baf0-4d07-9780-9efa866909a4" containerName="ceilometer-central-agent" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.208025 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bae6b3a-baf0-4d07-9780-9efa866909a4" containerName="ceilometer-central-agent" Dec 02 20:38:41 crc kubenswrapper[4796]: E1202 20:38:41.208038 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bae6b3a-baf0-4d07-9780-9efa866909a4" containerName="ceilometer-notification-agent" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.208045 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bae6b3a-baf0-4d07-9780-9efa866909a4" containerName="ceilometer-notification-agent" Dec 02 20:38:41 crc kubenswrapper[4796]: E1202 20:38:41.208063 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bae6b3a-baf0-4d07-9780-9efa866909a4" containerName="proxy-httpd" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.208069 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bae6b3a-baf0-4d07-9780-9efa866909a4" containerName="proxy-httpd" Dec 02 20:38:41 crc kubenswrapper[4796]: E1202 20:38:41.208093 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bae6b3a-baf0-4d07-9780-9efa866909a4" containerName="sg-core" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.208099 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bae6b3a-baf0-4d07-9780-9efa866909a4" containerName="sg-core" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.208266 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bae6b3a-baf0-4d07-9780-9efa866909a4" containerName="sg-core" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.208290 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bae6b3a-baf0-4d07-9780-9efa866909a4" containerName="ceilometer-notification-agent" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.208301 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bae6b3a-baf0-4d07-9780-9efa866909a4" containerName="ceilometer-central-agent" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 
20:38:41.208311 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bae6b3a-baf0-4d07-9780-9efa866909a4" containerName="proxy-httpd" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.209758 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.213748 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.213799 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.214465 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.223438 4796 scope.go:117] "RemoveContainer" containerID="c331ac54f3a77026fca854d1bb23efa8a831963010f96decd4000c8bb12ec366" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.228104 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.248086 4796 scope.go:117] "RemoveContainer" containerID="b382e1e2409674ff8ec6ccccabfe02523817b1097af31815b893a0829137dcc4" Dec 02 20:38:41 crc kubenswrapper[4796]: E1202 20:38:41.249536 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b382e1e2409674ff8ec6ccccabfe02523817b1097af31815b893a0829137dcc4\": container with ID starting with b382e1e2409674ff8ec6ccccabfe02523817b1097af31815b893a0829137dcc4 not found: ID does not exist" containerID="b382e1e2409674ff8ec6ccccabfe02523817b1097af31815b893a0829137dcc4" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.249569 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b382e1e2409674ff8ec6ccccabfe02523817b1097af31815b893a0829137dcc4"} err="failed to get container status \"b382e1e2409674ff8ec6ccccabfe02523817b1097af31815b893a0829137dcc4\": rpc error: code = NotFound desc = could not find container \"b382e1e2409674ff8ec6ccccabfe02523817b1097af31815b893a0829137dcc4\": container with ID starting with b382e1e2409674ff8ec6ccccabfe02523817b1097af31815b893a0829137dcc4 not found: ID does not exist" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.249592 4796 scope.go:117] "RemoveContainer" containerID="1408d50a88999b18193233c52a2d0eb39f0220dc68d7888e13df1d162c23a3e0" Dec 02 20:38:41 crc kubenswrapper[4796]: E1202 20:38:41.251186 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1408d50a88999b18193233c52a2d0eb39f0220dc68d7888e13df1d162c23a3e0\": container with ID starting with 1408d50a88999b18193233c52a2d0eb39f0220dc68d7888e13df1d162c23a3e0 not found: ID does not exist" containerID="1408d50a88999b18193233c52a2d0eb39f0220dc68d7888e13df1d162c23a3e0" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.251354 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1408d50a88999b18193233c52a2d0eb39f0220dc68d7888e13df1d162c23a3e0"} err="failed to get container status \"1408d50a88999b18193233c52a2d0eb39f0220dc68d7888e13df1d162c23a3e0\": rpc error: code = NotFound desc = could not find container 
\"1408d50a88999b18193233c52a2d0eb39f0220dc68d7888e13df1d162c23a3e0\": container with ID starting with 1408d50a88999b18193233c52a2d0eb39f0220dc68d7888e13df1d162c23a3e0 not found: ID does not exist" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.251437 4796 scope.go:117] "RemoveContainer" containerID="58843ae7b3f76bebb2f667695492447b5aebdd001419199931fd6c039e93ff2b" Dec 02 20:38:41 crc kubenswrapper[4796]: E1202 20:38:41.251903 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58843ae7b3f76bebb2f667695492447b5aebdd001419199931fd6c039e93ff2b\": container with ID starting with 58843ae7b3f76bebb2f667695492447b5aebdd001419199931fd6c039e93ff2b not found: ID does not exist" containerID="58843ae7b3f76bebb2f667695492447b5aebdd001419199931fd6c039e93ff2b" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.251949 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58843ae7b3f76bebb2f667695492447b5aebdd001419199931fd6c039e93ff2b"} err="failed to get container status \"58843ae7b3f76bebb2f667695492447b5aebdd001419199931fd6c039e93ff2b\": rpc error: code = NotFound desc = could not find container \"58843ae7b3f76bebb2f667695492447b5aebdd001419199931fd6c039e93ff2b\": container with ID starting with 58843ae7b3f76bebb2f667695492447b5aebdd001419199931fd6c039e93ff2b not found: ID does not exist" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.251975 4796 scope.go:117] "RemoveContainer" containerID="c331ac54f3a77026fca854d1bb23efa8a831963010f96decd4000c8bb12ec366" Dec 02 20:38:41 crc kubenswrapper[4796]: E1202 20:38:41.252398 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c331ac54f3a77026fca854d1bb23efa8a831963010f96decd4000c8bb12ec366\": container with ID starting with c331ac54f3a77026fca854d1bb23efa8a831963010f96decd4000c8bb12ec366 not found: ID does not exist" containerID="c331ac54f3a77026fca854d1bb23efa8a831963010f96decd4000c8bb12ec366" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.252560 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c331ac54f3a77026fca854d1bb23efa8a831963010f96decd4000c8bb12ec366"} err="failed to get container status \"c331ac54f3a77026fca854d1bb23efa8a831963010f96decd4000c8bb12ec366\": rpc error: code = NotFound desc = could not find container \"c331ac54f3a77026fca854d1bb23efa8a831963010f96decd4000c8bb12ec366\": container with ID starting with c331ac54f3a77026fca854d1bb23efa8a831963010f96decd4000c8bb12ec366 not found: ID does not exist" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.275943 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bae6b3a-baf0-4d07-9780-9efa866909a4" path="/var/lib/kubelet/pods/3bae6b3a-baf0-4d07-9780-9efa866909a4/volumes" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.316146 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/804820f7-3638-4b4f-ab22-314e4012647f-log-httpd\") pod \"ceilometer-0\" (UID: \"804820f7-3638-4b4f-ab22-314e4012647f\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.316199 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/804820f7-3638-4b4f-ab22-314e4012647f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"804820f7-3638-4b4f-ab22-314e4012647f\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.316225 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/804820f7-3638-4b4f-ab22-314e4012647f-run-httpd\") pod \"ceilometer-0\" (UID: \"804820f7-3638-4b4f-ab22-314e4012647f\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.316262 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62whw\" (UniqueName: \"kubernetes.io/projected/804820f7-3638-4b4f-ab22-314e4012647f-kube-api-access-62whw\") pod \"ceilometer-0\" (UID: \"804820f7-3638-4b4f-ab22-314e4012647f\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.316354 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/804820f7-3638-4b4f-ab22-314e4012647f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"804820f7-3638-4b4f-ab22-314e4012647f\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.316394 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/804820f7-3638-4b4f-ab22-314e4012647f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"804820f7-3638-4b4f-ab22-314e4012647f\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.316630 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/804820f7-3638-4b4f-ab22-314e4012647f-config-data\") pod \"ceilometer-0\" (UID: \"804820f7-3638-4b4f-ab22-314e4012647f\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.316729 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/804820f7-3638-4b4f-ab22-314e4012647f-scripts\") pod \"ceilometer-0\" (UID: \"804820f7-3638-4b4f-ab22-314e4012647f\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.418557 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/804820f7-3638-4b4f-ab22-314e4012647f-config-data\") pod \"ceilometer-0\" (UID: \"804820f7-3638-4b4f-ab22-314e4012647f\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.418664 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/804820f7-3638-4b4f-ab22-314e4012647f-scripts\") pod \"ceilometer-0\" (UID: \"804820f7-3638-4b4f-ab22-314e4012647f\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.418745 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/804820f7-3638-4b4f-ab22-314e4012647f-log-httpd\") pod \"ceilometer-0\" (UID: \"804820f7-3638-4b4f-ab22-314e4012647f\") " 
pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.418777 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/804820f7-3638-4b4f-ab22-314e4012647f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"804820f7-3638-4b4f-ab22-314e4012647f\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.418816 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/804820f7-3638-4b4f-ab22-314e4012647f-run-httpd\") pod \"ceilometer-0\" (UID: \"804820f7-3638-4b4f-ab22-314e4012647f\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.418852 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62whw\" (UniqueName: \"kubernetes.io/projected/804820f7-3638-4b4f-ab22-314e4012647f-kube-api-access-62whw\") pod \"ceilometer-0\" (UID: \"804820f7-3638-4b4f-ab22-314e4012647f\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.418890 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/804820f7-3638-4b4f-ab22-314e4012647f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"804820f7-3638-4b4f-ab22-314e4012647f\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.418924 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/804820f7-3638-4b4f-ab22-314e4012647f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"804820f7-3638-4b4f-ab22-314e4012647f\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.420514 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/804820f7-3638-4b4f-ab22-314e4012647f-run-httpd\") pod \"ceilometer-0\" (UID: \"804820f7-3638-4b4f-ab22-314e4012647f\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.420639 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/804820f7-3638-4b4f-ab22-314e4012647f-log-httpd\") pod \"ceilometer-0\" (UID: \"804820f7-3638-4b4f-ab22-314e4012647f\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.424541 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/804820f7-3638-4b4f-ab22-314e4012647f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"804820f7-3638-4b4f-ab22-314e4012647f\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.424574 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/804820f7-3638-4b4f-ab22-314e4012647f-scripts\") pod \"ceilometer-0\" (UID: \"804820f7-3638-4b4f-ab22-314e4012647f\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.424860 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/804820f7-3638-4b4f-ab22-314e4012647f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"804820f7-3638-4b4f-ab22-314e4012647f\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.425775 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/804820f7-3638-4b4f-ab22-314e4012647f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"804820f7-3638-4b4f-ab22-314e4012647f\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.427406 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/804820f7-3638-4b4f-ab22-314e4012647f-config-data\") pod \"ceilometer-0\" (UID: \"804820f7-3638-4b4f-ab22-314e4012647f\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.436986 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62whw\" (UniqueName: \"kubernetes.io/projected/804820f7-3638-4b4f-ab22-314e4012647f-kube-api-access-62whw\") pod \"ceilometer-0\" (UID: \"804820f7-3638-4b4f-ab22-314e4012647f\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:41 crc kubenswrapper[4796]: I1202 20:38:41.546031 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:42 crc kubenswrapper[4796]: I1202 20:38:42.093029 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:38:42 crc kubenswrapper[4796]: W1202 20:38:42.102966 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod804820f7_3638_4b4f_ab22_314e4012647f.slice/crio-1daf09f2d90d260b8ea2f423bceb7461458192bb1bd7d49f7307202382a34a72 WatchSource:0}: Error finding container 1daf09f2d90d260b8ea2f423bceb7461458192bb1bd7d49f7307202382a34a72: Status 404 returned error can't find the container with id 1daf09f2d90d260b8ea2f423bceb7461458192bb1bd7d49f7307202382a34a72 Dec 02 20:38:42 crc kubenswrapper[4796]: I1202 20:38:42.607652 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-jgr9z" Dec 02 20:38:42 crc kubenswrapper[4796]: I1202 20:38:42.665773 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-cb37-account-create-update-xf2w9" Dec 02 20:38:42 crc kubenswrapper[4796]: I1202 20:38:42.742854 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwmmt\" (UniqueName: \"kubernetes.io/projected/c19eaf38-ecbd-469a-a0b3-78028623baac-kube-api-access-fwmmt\") pod \"c19eaf38-ecbd-469a-a0b3-78028623baac\" (UID: \"c19eaf38-ecbd-469a-a0b3-78028623baac\") " Dec 02 20:38:42 crc kubenswrapper[4796]: I1202 20:38:42.743053 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwjwg\" (UniqueName: \"kubernetes.io/projected/94d81d37-8e9d-47e1-b1de-415f2c7a5024-kube-api-access-vwjwg\") pod \"94d81d37-8e9d-47e1-b1de-415f2c7a5024\" (UID: \"94d81d37-8e9d-47e1-b1de-415f2c7a5024\") " Dec 02 20:38:42 crc kubenswrapper[4796]: I1202 20:38:42.743096 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c19eaf38-ecbd-469a-a0b3-78028623baac-operator-scripts\") pod \"c19eaf38-ecbd-469a-a0b3-78028623baac\" (UID: \"c19eaf38-ecbd-469a-a0b3-78028623baac\") " Dec 02 20:38:42 crc kubenswrapper[4796]: I1202 20:38:42.743119 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94d81d37-8e9d-47e1-b1de-415f2c7a5024-operator-scripts\") pod \"94d81d37-8e9d-47e1-b1de-415f2c7a5024\" (UID: \"94d81d37-8e9d-47e1-b1de-415f2c7a5024\") " Dec 02 20:38:42 crc kubenswrapper[4796]: I1202 20:38:42.744519 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c19eaf38-ecbd-469a-a0b3-78028623baac-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c19eaf38-ecbd-469a-a0b3-78028623baac" (UID: "c19eaf38-ecbd-469a-a0b3-78028623baac"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:38:42 crc kubenswrapper[4796]: I1202 20:38:42.744568 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94d81d37-8e9d-47e1-b1de-415f2c7a5024-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "94d81d37-8e9d-47e1-b1de-415f2c7a5024" (UID: "94d81d37-8e9d-47e1-b1de-415f2c7a5024"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:38:42 crc kubenswrapper[4796]: I1202 20:38:42.748519 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94d81d37-8e9d-47e1-b1de-415f2c7a5024-kube-api-access-vwjwg" (OuterVolumeSpecName: "kube-api-access-vwjwg") pod "94d81d37-8e9d-47e1-b1de-415f2c7a5024" (UID: "94d81d37-8e9d-47e1-b1de-415f2c7a5024"). InnerVolumeSpecName "kube-api-access-vwjwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:38:42 crc kubenswrapper[4796]: I1202 20:38:42.760533 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c19eaf38-ecbd-469a-a0b3-78028623baac-kube-api-access-fwmmt" (OuterVolumeSpecName: "kube-api-access-fwmmt") pod "c19eaf38-ecbd-469a-a0b3-78028623baac" (UID: "c19eaf38-ecbd-469a-a0b3-78028623baac"). InnerVolumeSpecName "kube-api-access-fwmmt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:38:42 crc kubenswrapper[4796]: I1202 20:38:42.845636 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwmmt\" (UniqueName: \"kubernetes.io/projected/c19eaf38-ecbd-469a-a0b3-78028623baac-kube-api-access-fwmmt\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:42 crc kubenswrapper[4796]: I1202 20:38:42.845680 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwjwg\" (UniqueName: \"kubernetes.io/projected/94d81d37-8e9d-47e1-b1de-415f2c7a5024-kube-api-access-vwjwg\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:42 crc kubenswrapper[4796]: I1202 20:38:42.845696 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c19eaf38-ecbd-469a-a0b3-78028623baac-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:42 crc kubenswrapper[4796]: I1202 20:38:42.845708 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94d81d37-8e9d-47e1-b1de-415f2c7a5024-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:43 crc kubenswrapper[4796]: I1202 20:38:43.123692 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-cb37-account-create-update-xf2w9" event={"ID":"c19eaf38-ecbd-469a-a0b3-78028623baac","Type":"ContainerDied","Data":"2a89701517a000289a6162ad2735cd147ca35e71f6f329dabadd0dcf08f3599b"} Dec 02 20:38:43 crc kubenswrapper[4796]: I1202 20:38:43.124058 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a89701517a000289a6162ad2735cd147ca35e71f6f329dabadd0dcf08f3599b" Dec 02 20:38:43 crc kubenswrapper[4796]: I1202 20:38:43.123774 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-cb37-account-create-update-xf2w9" Dec 02 20:38:43 crc kubenswrapper[4796]: I1202 20:38:43.125961 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-jgr9z" event={"ID":"94d81d37-8e9d-47e1-b1de-415f2c7a5024","Type":"ContainerDied","Data":"d9d0a8886247821f0a73b3893e35cf9b82786578c21957155fbc80951683d874"} Dec 02 20:38:43 crc kubenswrapper[4796]: I1202 20:38:43.126073 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9d0a8886247821f0a73b3893e35cf9b82786578c21957155fbc80951683d874" Dec 02 20:38:43 crc kubenswrapper[4796]: I1202 20:38:43.126006 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-jgr9z" Dec 02 20:38:43 crc kubenswrapper[4796]: I1202 20:38:43.128140 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"804820f7-3638-4b4f-ab22-314e4012647f","Type":"ContainerStarted","Data":"8479e779c317227471921d9f5e3992e86f7c6301f264f11cee4c0797a0763a28"} Dec 02 20:38:43 crc kubenswrapper[4796]: I1202 20:38:43.128187 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"804820f7-3638-4b4f-ab22-314e4012647f","Type":"ContainerStarted","Data":"1daf09f2d90d260b8ea2f423bceb7461458192bb1bd7d49f7307202382a34a72"} Dec 02 20:38:44 crc kubenswrapper[4796]: I1202 20:38:44.140447 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"804820f7-3638-4b4f-ab22-314e4012647f","Type":"ContainerStarted","Data":"53334c3e87453282a51362d81f150369b4deccae0948cb90dfe1e571ae809c04"} Dec 02 20:38:44 crc kubenswrapper[4796]: I1202 20:38:44.378217 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-srvxv"] Dec 02 20:38:44 crc kubenswrapper[4796]: I1202 20:38:44.378869 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-srvxv" podUID="c2bd55aa-8650-4335-93f1-82633f597f1a" containerName="registry-server" containerID="cri-o://8cd31b344f193f835fe4fe57d35545b1a7be7057a4c7ba7ab3454433ea672554" gracePeriod=2 Dec 02 20:38:44 crc kubenswrapper[4796]: I1202 20:38:44.690535 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-mr4ks"] Dec 02 20:38:44 crc kubenswrapper[4796]: E1202 20:38:44.691156 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c19eaf38-ecbd-469a-a0b3-78028623baac" containerName="mariadb-account-create-update" Dec 02 20:38:44 crc kubenswrapper[4796]: I1202 20:38:44.691171 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c19eaf38-ecbd-469a-a0b3-78028623baac" containerName="mariadb-account-create-update" Dec 02 20:38:44 crc kubenswrapper[4796]: E1202 20:38:44.691209 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94d81d37-8e9d-47e1-b1de-415f2c7a5024" containerName="mariadb-database-create" Dec 02 20:38:44 crc kubenswrapper[4796]: I1202 20:38:44.691217 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="94d81d37-8e9d-47e1-b1de-415f2c7a5024" containerName="mariadb-database-create" Dec 02 20:38:44 crc kubenswrapper[4796]: I1202 20:38:44.691399 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="94d81d37-8e9d-47e1-b1de-415f2c7a5024" containerName="mariadb-database-create" Dec 02 20:38:44 crc kubenswrapper[4796]: I1202 20:38:44.691426 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="c19eaf38-ecbd-469a-a0b3-78028623baac" containerName="mariadb-account-create-update" Dec 02 20:38:44 crc kubenswrapper[4796]: I1202 20:38:44.692005 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-mr4ks" Dec 02 20:38:44 crc kubenswrapper[4796]: I1202 20:38:44.697556 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-b5th9" Dec 02 20:38:44 crc kubenswrapper[4796]: I1202 20:38:44.697902 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Dec 02 20:38:44 crc kubenswrapper[4796]: I1202 20:38:44.726097 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-mr4ks"] Dec 02 20:38:44 crc kubenswrapper[4796]: I1202 20:38:44.792784 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9x5j\" (UniqueName: \"kubernetes.io/projected/c1e4eaa0-0211-4ce2-8b05-afb240242238-kube-api-access-b9x5j\") pod \"watcher-kuttl-db-sync-mr4ks\" (UID: \"c1e4eaa0-0211-4ce2-8b05-afb240242238\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-mr4ks" Dec 02 20:38:44 crc kubenswrapper[4796]: I1202 20:38:44.792888 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1e4eaa0-0211-4ce2-8b05-afb240242238-config-data\") pod \"watcher-kuttl-db-sync-mr4ks\" (UID: \"c1e4eaa0-0211-4ce2-8b05-afb240242238\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-mr4ks" Dec 02 20:38:44 crc kubenswrapper[4796]: I1202 20:38:44.792919 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e4eaa0-0211-4ce2-8b05-afb240242238-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-mr4ks\" (UID: \"c1e4eaa0-0211-4ce2-8b05-afb240242238\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-mr4ks" Dec 02 20:38:44 crc kubenswrapper[4796]: I1202 20:38:44.793058 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c1e4eaa0-0211-4ce2-8b05-afb240242238-db-sync-config-data\") pod \"watcher-kuttl-db-sync-mr4ks\" (UID: \"c1e4eaa0-0211-4ce2-8b05-afb240242238\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-mr4ks" Dec 02 20:38:44 crc kubenswrapper[4796]: I1202 20:38:44.894758 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c1e4eaa0-0211-4ce2-8b05-afb240242238-db-sync-config-data\") pod \"watcher-kuttl-db-sync-mr4ks\" (UID: \"c1e4eaa0-0211-4ce2-8b05-afb240242238\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-mr4ks" Dec 02 20:38:44 crc kubenswrapper[4796]: I1202 20:38:44.894860 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9x5j\" (UniqueName: \"kubernetes.io/projected/c1e4eaa0-0211-4ce2-8b05-afb240242238-kube-api-access-b9x5j\") pod \"watcher-kuttl-db-sync-mr4ks\" (UID: \"c1e4eaa0-0211-4ce2-8b05-afb240242238\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-mr4ks" Dec 02 20:38:44 crc kubenswrapper[4796]: I1202 20:38:44.894896 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1e4eaa0-0211-4ce2-8b05-afb240242238-config-data\") pod \"watcher-kuttl-db-sync-mr4ks\" (UID: \"c1e4eaa0-0211-4ce2-8b05-afb240242238\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-mr4ks" Dec 02 20:38:44 crc 
kubenswrapper[4796]: I1202 20:38:44.894919 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e4eaa0-0211-4ce2-8b05-afb240242238-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-mr4ks\" (UID: \"c1e4eaa0-0211-4ce2-8b05-afb240242238\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-mr4ks" Dec 02 20:38:44 crc kubenswrapper[4796]: I1202 20:38:44.901944 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c1e4eaa0-0211-4ce2-8b05-afb240242238-db-sync-config-data\") pod \"watcher-kuttl-db-sync-mr4ks\" (UID: \"c1e4eaa0-0211-4ce2-8b05-afb240242238\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-mr4ks" Dec 02 20:38:44 crc kubenswrapper[4796]: I1202 20:38:44.910242 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e4eaa0-0211-4ce2-8b05-afb240242238-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-mr4ks\" (UID: \"c1e4eaa0-0211-4ce2-8b05-afb240242238\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-mr4ks" Dec 02 20:38:44 crc kubenswrapper[4796]: I1202 20:38:44.927372 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9x5j\" (UniqueName: \"kubernetes.io/projected/c1e4eaa0-0211-4ce2-8b05-afb240242238-kube-api-access-b9x5j\") pod \"watcher-kuttl-db-sync-mr4ks\" (UID: \"c1e4eaa0-0211-4ce2-8b05-afb240242238\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-mr4ks" Dec 02 20:38:44 crc kubenswrapper[4796]: I1202 20:38:44.930725 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1e4eaa0-0211-4ce2-8b05-afb240242238-config-data\") pod \"watcher-kuttl-db-sync-mr4ks\" (UID: \"c1e4eaa0-0211-4ce2-8b05-afb240242238\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-mr4ks" Dec 02 20:38:45 crc kubenswrapper[4796]: I1202 20:38:45.038160 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-srvxv" Dec 02 20:38:45 crc kubenswrapper[4796]: I1202 20:38:45.061359 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-mr4ks" Dec 02 20:38:45 crc kubenswrapper[4796]: I1202 20:38:45.099858 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2bd55aa-8650-4335-93f1-82633f597f1a-catalog-content\") pod \"c2bd55aa-8650-4335-93f1-82633f597f1a\" (UID: \"c2bd55aa-8650-4335-93f1-82633f597f1a\") " Dec 02 20:38:45 crc kubenswrapper[4796]: I1202 20:38:45.100009 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2bd55aa-8650-4335-93f1-82633f597f1a-utilities\") pod \"c2bd55aa-8650-4335-93f1-82633f597f1a\" (UID: \"c2bd55aa-8650-4335-93f1-82633f597f1a\") " Dec 02 20:38:45 crc kubenswrapper[4796]: I1202 20:38:45.100107 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdm4q\" (UniqueName: \"kubernetes.io/projected/c2bd55aa-8650-4335-93f1-82633f597f1a-kube-api-access-cdm4q\") pod \"c2bd55aa-8650-4335-93f1-82633f597f1a\" (UID: \"c2bd55aa-8650-4335-93f1-82633f597f1a\") " Dec 02 20:38:45 crc kubenswrapper[4796]: I1202 20:38:45.101843 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2bd55aa-8650-4335-93f1-82633f597f1a-utilities" (OuterVolumeSpecName: "utilities") pod "c2bd55aa-8650-4335-93f1-82633f597f1a" (UID: "c2bd55aa-8650-4335-93f1-82633f597f1a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:38:45 crc kubenswrapper[4796]: I1202 20:38:45.112744 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2bd55aa-8650-4335-93f1-82633f597f1a-kube-api-access-cdm4q" (OuterVolumeSpecName: "kube-api-access-cdm4q") pod "c2bd55aa-8650-4335-93f1-82633f597f1a" (UID: "c2bd55aa-8650-4335-93f1-82633f597f1a"). InnerVolumeSpecName "kube-api-access-cdm4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:38:45 crc kubenswrapper[4796]: I1202 20:38:45.161523 4796 generic.go:334] "Generic (PLEG): container finished" podID="c2bd55aa-8650-4335-93f1-82633f597f1a" containerID="8cd31b344f193f835fe4fe57d35545b1a7be7057a4c7ba7ab3454433ea672554" exitCode=0 Dec 02 20:38:45 crc kubenswrapper[4796]: I1202 20:38:45.161679 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-srvxv" event={"ID":"c2bd55aa-8650-4335-93f1-82633f597f1a","Type":"ContainerDied","Data":"8cd31b344f193f835fe4fe57d35545b1a7be7057a4c7ba7ab3454433ea672554"} Dec 02 20:38:45 crc kubenswrapper[4796]: I1202 20:38:45.161717 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-srvxv" event={"ID":"c2bd55aa-8650-4335-93f1-82633f597f1a","Type":"ContainerDied","Data":"e31b4aada1fc0c9f877e5d552390df5b290628fcb58428f2b67a7d1380de22db"} Dec 02 20:38:45 crc kubenswrapper[4796]: I1202 20:38:45.161739 4796 scope.go:117] "RemoveContainer" containerID="8cd31b344f193f835fe4fe57d35545b1a7be7057a4c7ba7ab3454433ea672554" Dec 02 20:38:45 crc kubenswrapper[4796]: I1202 20:38:45.161808 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-srvxv" Dec 02 20:38:45 crc kubenswrapper[4796]: I1202 20:38:45.170142 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"804820f7-3638-4b4f-ab22-314e4012647f","Type":"ContainerStarted","Data":"07aac40592f468c6ce1bd0b66307efbd8c749174a1f0af8357080305c246772c"} Dec 02 20:38:45 crc kubenswrapper[4796]: I1202 20:38:45.177099 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2bd55aa-8650-4335-93f1-82633f597f1a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c2bd55aa-8650-4335-93f1-82633f597f1a" (UID: "c2bd55aa-8650-4335-93f1-82633f597f1a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:38:45 crc kubenswrapper[4796]: I1202 20:38:45.196741 4796 scope.go:117] "RemoveContainer" containerID="3293b99b9df40e3800f179bebf2fe033233948ae256e12def08de95c6c526f47" Dec 02 20:38:45 crc kubenswrapper[4796]: I1202 20:38:45.206710 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2bd55aa-8650-4335-93f1-82633f597f1a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:45 crc kubenswrapper[4796]: I1202 20:38:45.206731 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2bd55aa-8650-4335-93f1-82633f597f1a-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:45 crc kubenswrapper[4796]: I1202 20:38:45.206740 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdm4q\" (UniqueName: \"kubernetes.io/projected/c2bd55aa-8650-4335-93f1-82633f597f1a-kube-api-access-cdm4q\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:45 crc kubenswrapper[4796]: I1202 20:38:45.242103 4796 scope.go:117] "RemoveContainer" containerID="ae4abcec7059169122fda6d4d1c856aa07919e556fa90db0523370229a51d780" Dec 02 20:38:45 crc kubenswrapper[4796]: I1202 20:38:45.268562 4796 scope.go:117] "RemoveContainer" containerID="a3b1e4c1e71e5db50d52130283a4f488c4d2f8d9bfa463ec7357e0975e3daac1" Dec 02 20:38:45 crc kubenswrapper[4796]: E1202 20:38:45.268740 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wzhpq_openshift-machine-config-operator(5558dc7c-93f9-4212-bf22-fdec743e47ee)\"" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" Dec 02 20:38:45 crc kubenswrapper[4796]: I1202 20:38:45.268947 4796 scope.go:117] "RemoveContainer" containerID="8cd31b344f193f835fe4fe57d35545b1a7be7057a4c7ba7ab3454433ea672554" Dec 02 20:38:45 crc kubenswrapper[4796]: E1202 20:38:45.274042 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cd31b344f193f835fe4fe57d35545b1a7be7057a4c7ba7ab3454433ea672554\": container with ID starting with 8cd31b344f193f835fe4fe57d35545b1a7be7057a4c7ba7ab3454433ea672554 not found: ID does not exist" containerID="8cd31b344f193f835fe4fe57d35545b1a7be7057a4c7ba7ab3454433ea672554" Dec 02 20:38:45 crc kubenswrapper[4796]: I1202 20:38:45.274078 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cd31b344f193f835fe4fe57d35545b1a7be7057a4c7ba7ab3454433ea672554"} 
err="failed to get container status \"8cd31b344f193f835fe4fe57d35545b1a7be7057a4c7ba7ab3454433ea672554\": rpc error: code = NotFound desc = could not find container \"8cd31b344f193f835fe4fe57d35545b1a7be7057a4c7ba7ab3454433ea672554\": container with ID starting with 8cd31b344f193f835fe4fe57d35545b1a7be7057a4c7ba7ab3454433ea672554 not found: ID does not exist" Dec 02 20:38:45 crc kubenswrapper[4796]: I1202 20:38:45.274097 4796 scope.go:117] "RemoveContainer" containerID="3293b99b9df40e3800f179bebf2fe033233948ae256e12def08de95c6c526f47" Dec 02 20:38:45 crc kubenswrapper[4796]: E1202 20:38:45.278392 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3293b99b9df40e3800f179bebf2fe033233948ae256e12def08de95c6c526f47\": container with ID starting with 3293b99b9df40e3800f179bebf2fe033233948ae256e12def08de95c6c526f47 not found: ID does not exist" containerID="3293b99b9df40e3800f179bebf2fe033233948ae256e12def08de95c6c526f47" Dec 02 20:38:45 crc kubenswrapper[4796]: I1202 20:38:45.278437 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3293b99b9df40e3800f179bebf2fe033233948ae256e12def08de95c6c526f47"} err="failed to get container status \"3293b99b9df40e3800f179bebf2fe033233948ae256e12def08de95c6c526f47\": rpc error: code = NotFound desc = could not find container \"3293b99b9df40e3800f179bebf2fe033233948ae256e12def08de95c6c526f47\": container with ID starting with 3293b99b9df40e3800f179bebf2fe033233948ae256e12def08de95c6c526f47 not found: ID does not exist" Dec 02 20:38:45 crc kubenswrapper[4796]: I1202 20:38:45.278467 4796 scope.go:117] "RemoveContainer" containerID="ae4abcec7059169122fda6d4d1c856aa07919e556fa90db0523370229a51d780" Dec 02 20:38:45 crc kubenswrapper[4796]: E1202 20:38:45.279563 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae4abcec7059169122fda6d4d1c856aa07919e556fa90db0523370229a51d780\": container with ID starting with ae4abcec7059169122fda6d4d1c856aa07919e556fa90db0523370229a51d780 not found: ID does not exist" containerID="ae4abcec7059169122fda6d4d1c856aa07919e556fa90db0523370229a51d780" Dec 02 20:38:45 crc kubenswrapper[4796]: I1202 20:38:45.279593 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae4abcec7059169122fda6d4d1c856aa07919e556fa90db0523370229a51d780"} err="failed to get container status \"ae4abcec7059169122fda6d4d1c856aa07919e556fa90db0523370229a51d780\": rpc error: code = NotFound desc = could not find container \"ae4abcec7059169122fda6d4d1c856aa07919e556fa90db0523370229a51d780\": container with ID starting with ae4abcec7059169122fda6d4d1c856aa07919e556fa90db0523370229a51d780 not found: ID does not exist" Dec 02 20:38:45 crc kubenswrapper[4796]: I1202 20:38:45.484625 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-srvxv"] Dec 02 20:38:45 crc kubenswrapper[4796]: I1202 20:38:45.492674 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-srvxv"] Dec 02 20:38:45 crc kubenswrapper[4796]: I1202 20:38:45.535398 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-mr4ks"] Dec 02 20:38:45 crc kubenswrapper[4796]: W1202 20:38:45.539429 4796 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1e4eaa0_0211_4ce2_8b05_afb240242238.slice/crio-0a14d26f4bca01c9144fec91d7ff2c271946041937fb15e0188d6ed8399eb383 WatchSource:0}: Error finding container 0a14d26f4bca01c9144fec91d7ff2c271946041937fb15e0188d6ed8399eb383: Status 404 returned error can't find the container with id 0a14d26f4bca01c9144fec91d7ff2c271946041937fb15e0188d6ed8399eb383 Dec 02 20:38:46 crc kubenswrapper[4796]: I1202 20:38:46.187467 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-mr4ks" event={"ID":"c1e4eaa0-0211-4ce2-8b05-afb240242238","Type":"ContainerStarted","Data":"985476a941d6b8412885000eaed827b938002fd9d3f46536988dfe36ebbf9a44"} Dec 02 20:38:46 crc kubenswrapper[4796]: I1202 20:38:46.187888 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-mr4ks" event={"ID":"c1e4eaa0-0211-4ce2-8b05-afb240242238","Type":"ContainerStarted","Data":"0a14d26f4bca01c9144fec91d7ff2c271946041937fb15e0188d6ed8399eb383"} Dec 02 20:38:46 crc kubenswrapper[4796]: I1202 20:38:46.197810 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:38:46 crc kubenswrapper[4796]: I1202 20:38:46.204714 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-mr4ks" podStartSLOduration=2.204695676 podStartE2EDuration="2.204695676s" podCreationTimestamp="2025-12-02 20:38:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:38:46.202507534 +0000 UTC m=+1609.205883068" watchObservedRunningTime="2025-12-02 20:38:46.204695676 +0000 UTC m=+1609.208071210" Dec 02 20:38:46 crc kubenswrapper[4796]: I1202 20:38:46.227345 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.4201750020000001 podStartE2EDuration="5.22733211s" podCreationTimestamp="2025-12-02 20:38:41 +0000 UTC" firstStartedPulling="2025-12-02 20:38:42.108052438 +0000 UTC m=+1605.111427972" lastFinishedPulling="2025-12-02 20:38:45.915209556 +0000 UTC m=+1608.918585080" observedRunningTime="2025-12-02 20:38:46.22276774 +0000 UTC m=+1609.226143274" watchObservedRunningTime="2025-12-02 20:38:46.22733211 +0000 UTC m=+1609.230707644" Dec 02 20:38:47 crc kubenswrapper[4796]: I1202 20:38:47.210514 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"804820f7-3638-4b4f-ab22-314e4012647f","Type":"ContainerStarted","Data":"b0b3203801006fd1c81f767a62cd5127d7e1a61c3ca04a1071059c899275f87a"} Dec 02 20:38:47 crc kubenswrapper[4796]: I1202 20:38:47.277338 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2bd55aa-8650-4335-93f1-82633f597f1a" path="/var/lib/kubelet/pods/c2bd55aa-8650-4335-93f1-82633f597f1a/volumes" Dec 02 20:38:48 crc kubenswrapper[4796]: I1202 20:38:48.227783 4796 generic.go:334] "Generic (PLEG): container finished" podID="c1e4eaa0-0211-4ce2-8b05-afb240242238" containerID="985476a941d6b8412885000eaed827b938002fd9d3f46536988dfe36ebbf9a44" exitCode=0 Dec 02 20:38:48 crc kubenswrapper[4796]: I1202 20:38:48.228718 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-mr4ks" 
event={"ID":"c1e4eaa0-0211-4ce2-8b05-afb240242238","Type":"ContainerDied","Data":"985476a941d6b8412885000eaed827b938002fd9d3f46536988dfe36ebbf9a44"} Dec 02 20:38:49 crc kubenswrapper[4796]: I1202 20:38:49.613320 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-mr4ks" Dec 02 20:38:49 crc kubenswrapper[4796]: I1202 20:38:49.685841 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e4eaa0-0211-4ce2-8b05-afb240242238-combined-ca-bundle\") pod \"c1e4eaa0-0211-4ce2-8b05-afb240242238\" (UID: \"c1e4eaa0-0211-4ce2-8b05-afb240242238\") " Dec 02 20:38:49 crc kubenswrapper[4796]: I1202 20:38:49.686073 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9x5j\" (UniqueName: \"kubernetes.io/projected/c1e4eaa0-0211-4ce2-8b05-afb240242238-kube-api-access-b9x5j\") pod \"c1e4eaa0-0211-4ce2-8b05-afb240242238\" (UID: \"c1e4eaa0-0211-4ce2-8b05-afb240242238\") " Dec 02 20:38:49 crc kubenswrapper[4796]: I1202 20:38:49.686099 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c1e4eaa0-0211-4ce2-8b05-afb240242238-db-sync-config-data\") pod \"c1e4eaa0-0211-4ce2-8b05-afb240242238\" (UID: \"c1e4eaa0-0211-4ce2-8b05-afb240242238\") " Dec 02 20:38:49 crc kubenswrapper[4796]: I1202 20:38:49.686120 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1e4eaa0-0211-4ce2-8b05-afb240242238-config-data\") pod \"c1e4eaa0-0211-4ce2-8b05-afb240242238\" (UID: \"c1e4eaa0-0211-4ce2-8b05-afb240242238\") " Dec 02 20:38:49 crc kubenswrapper[4796]: I1202 20:38:49.707521 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1e4eaa0-0211-4ce2-8b05-afb240242238-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c1e4eaa0-0211-4ce2-8b05-afb240242238" (UID: "c1e4eaa0-0211-4ce2-8b05-afb240242238"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:38:49 crc kubenswrapper[4796]: I1202 20:38:49.707527 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1e4eaa0-0211-4ce2-8b05-afb240242238-kube-api-access-b9x5j" (OuterVolumeSpecName: "kube-api-access-b9x5j") pod "c1e4eaa0-0211-4ce2-8b05-afb240242238" (UID: "c1e4eaa0-0211-4ce2-8b05-afb240242238"). InnerVolumeSpecName "kube-api-access-b9x5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:38:49 crc kubenswrapper[4796]: I1202 20:38:49.786404 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1e4eaa0-0211-4ce2-8b05-afb240242238-config-data" (OuterVolumeSpecName: "config-data") pod "c1e4eaa0-0211-4ce2-8b05-afb240242238" (UID: "c1e4eaa0-0211-4ce2-8b05-afb240242238"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:38:49 crc kubenswrapper[4796]: I1202 20:38:49.789183 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9x5j\" (UniqueName: \"kubernetes.io/projected/c1e4eaa0-0211-4ce2-8b05-afb240242238-kube-api-access-b9x5j\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:49 crc kubenswrapper[4796]: I1202 20:38:49.789218 4796 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c1e4eaa0-0211-4ce2-8b05-afb240242238-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:49 crc kubenswrapper[4796]: I1202 20:38:49.789227 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1e4eaa0-0211-4ce2-8b05-afb240242238-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:49 crc kubenswrapper[4796]: I1202 20:38:49.793580 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1e4eaa0-0211-4ce2-8b05-afb240242238-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c1e4eaa0-0211-4ce2-8b05-afb240242238" (UID: "c1e4eaa0-0211-4ce2-8b05-afb240242238"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:38:49 crc kubenswrapper[4796]: I1202 20:38:49.890830 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e4eaa0-0211-4ce2-8b05-afb240242238-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.269652 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-mr4ks" event={"ID":"c1e4eaa0-0211-4ce2-8b05-afb240242238","Type":"ContainerDied","Data":"0a14d26f4bca01c9144fec91d7ff2c271946041937fb15e0188d6ed8399eb383"} Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.269695 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a14d26f4bca01c9144fec91d7ff2c271946041937fb15e0188d6ed8399eb383" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.269736 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-mr4ks" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.550326 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:38:50 crc kubenswrapper[4796]: E1202 20:38:50.550740 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1e4eaa0-0211-4ce2-8b05-afb240242238" containerName="watcher-kuttl-db-sync" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.550762 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1e4eaa0-0211-4ce2-8b05-afb240242238" containerName="watcher-kuttl-db-sync" Dec 02 20:38:50 crc kubenswrapper[4796]: E1202 20:38:50.550775 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2bd55aa-8650-4335-93f1-82633f597f1a" containerName="extract-content" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.550786 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2bd55aa-8650-4335-93f1-82633f597f1a" containerName="extract-content" Dec 02 20:38:50 crc kubenswrapper[4796]: E1202 20:38:50.550801 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2bd55aa-8650-4335-93f1-82633f597f1a" containerName="registry-server" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.550809 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2bd55aa-8650-4335-93f1-82633f597f1a" containerName="registry-server" Dec 02 20:38:50 crc kubenswrapper[4796]: E1202 20:38:50.550823 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2bd55aa-8650-4335-93f1-82633f597f1a" containerName="extract-utilities" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.550831 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2bd55aa-8650-4335-93f1-82633f597f1a" containerName="extract-utilities" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.551034 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2bd55aa-8650-4335-93f1-82633f597f1a" containerName="registry-server" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.551057 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1e4eaa0-0211-4ce2-8b05-afb240242238" containerName="watcher-kuttl-db-sync" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.552129 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.558985 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.560815 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-b5th9" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.567070 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.603577 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98s6w\" (UniqueName: \"kubernetes.io/projected/f393f072-a05c-4fcb-a197-434c890e5dd6-kube-api-access-98s6w\") pod \"watcher-kuttl-api-0\" (UID: \"f393f072-a05c-4fcb-a197-434c890e5dd6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.603634 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f393f072-a05c-4fcb-a197-434c890e5dd6-logs\") pod \"watcher-kuttl-api-0\" (UID: \"f393f072-a05c-4fcb-a197-434c890e5dd6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.603672 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f393f072-a05c-4fcb-a197-434c890e5dd6-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"f393f072-a05c-4fcb-a197-434c890e5dd6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.603701 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f393f072-a05c-4fcb-a197-434c890e5dd6-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"f393f072-a05c-4fcb-a197-434c890e5dd6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.603728 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f393f072-a05c-4fcb-a197-434c890e5dd6-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"f393f072-a05c-4fcb-a197-434c890e5dd6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.603759 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f393f072-a05c-4fcb-a197-434c890e5dd6-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"f393f072-a05c-4fcb-a197-434c890e5dd6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.673506 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.674520 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.679520 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.705316 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f704b1c0-8f43-442d-85fd-1ed65c91ac3a-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"f704b1c0-8f43-442d-85fd-1ed65c91ac3a\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.705379 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f704b1c0-8f43-442d-85fd-1ed65c91ac3a-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"f704b1c0-8f43-442d-85fd-1ed65c91ac3a\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.705449 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98s6w\" (UniqueName: \"kubernetes.io/projected/f393f072-a05c-4fcb-a197-434c890e5dd6-kube-api-access-98s6w\") pod \"watcher-kuttl-api-0\" (UID: \"f393f072-a05c-4fcb-a197-434c890e5dd6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.705559 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f393f072-a05c-4fcb-a197-434c890e5dd6-logs\") pod \"watcher-kuttl-api-0\" (UID: \"f393f072-a05c-4fcb-a197-434c890e5dd6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.705647 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f393f072-a05c-4fcb-a197-434c890e5dd6-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"f393f072-a05c-4fcb-a197-434c890e5dd6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.705736 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f393f072-a05c-4fcb-a197-434c890e5dd6-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"f393f072-a05c-4fcb-a197-434c890e5dd6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.705768 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f393f072-a05c-4fcb-a197-434c890e5dd6-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"f393f072-a05c-4fcb-a197-434c890e5dd6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.705841 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f704b1c0-8f43-442d-85fd-1ed65c91ac3a-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"f704b1c0-8f43-442d-85fd-1ed65c91ac3a\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.705866 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f704b1c0-8f43-442d-85fd-1ed65c91ac3a-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"f704b1c0-8f43-442d-85fd-1ed65c91ac3a\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.705900 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f393f072-a05c-4fcb-a197-434c890e5dd6-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"f393f072-a05c-4fcb-a197-434c890e5dd6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.705980 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fj7b\" (UniqueName: \"kubernetes.io/projected/f704b1c0-8f43-442d-85fd-1ed65c91ac3a-kube-api-access-8fj7b\") pod \"watcher-kuttl-applier-0\" (UID: \"f704b1c0-8f43-442d-85fd-1ed65c91ac3a\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.707124 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f393f072-a05c-4fcb-a197-434c890e5dd6-logs\") pod \"watcher-kuttl-api-0\" (UID: \"f393f072-a05c-4fcb-a197-434c890e5dd6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.712954 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f393f072-a05c-4fcb-a197-434c890e5dd6-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"f393f072-a05c-4fcb-a197-434c890e5dd6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.712997 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f393f072-a05c-4fcb-a197-434c890e5dd6-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"f393f072-a05c-4fcb-a197-434c890e5dd6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.714807 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f393f072-a05c-4fcb-a197-434c890e5dd6-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"f393f072-a05c-4fcb-a197-434c890e5dd6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.717121 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f393f072-a05c-4fcb-a197-434c890e5dd6-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"f393f072-a05c-4fcb-a197-434c890e5dd6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.721006 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.754600 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98s6w\" (UniqueName: \"kubernetes.io/projected/f393f072-a05c-4fcb-a197-434c890e5dd6-kube-api-access-98s6w\") pod \"watcher-kuttl-api-0\" (UID: \"f393f072-a05c-4fcb-a197-434c890e5dd6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:38:50 crc 
kubenswrapper[4796]: I1202 20:38:50.759198 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.764128 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.766228 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.787337 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.807334 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f704b1c0-8f43-442d-85fd-1ed65c91ac3a-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"f704b1c0-8f43-442d-85fd-1ed65c91ac3a\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.807375 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f704b1c0-8f43-442d-85fd-1ed65c91ac3a-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"f704b1c0-8f43-442d-85fd-1ed65c91ac3a\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.807399 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de9cf93c-00d2-489c-9ea7-e006e692e9be-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"de9cf93c-00d2-489c-9ea7-e006e692e9be\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.807433 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fj7b\" (UniqueName: \"kubernetes.io/projected/f704b1c0-8f43-442d-85fd-1ed65c91ac3a-kube-api-access-8fj7b\") pod \"watcher-kuttl-applier-0\" (UID: \"f704b1c0-8f43-442d-85fd-1ed65c91ac3a\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.807454 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/de9cf93c-00d2-489c-9ea7-e006e692e9be-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"de9cf93c-00d2-489c-9ea7-e006e692e9be\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.807480 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f704b1c0-8f43-442d-85fd-1ed65c91ac3a-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"f704b1c0-8f43-442d-85fd-1ed65c91ac3a\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.807519 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f704b1c0-8f43-442d-85fd-1ed65c91ac3a-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"f704b1c0-8f43-442d-85fd-1ed65c91ac3a\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:38:50 crc 
kubenswrapper[4796]: I1202 20:38:50.807548 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de9cf93c-00d2-489c-9ea7-e006e692e9be-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"de9cf93c-00d2-489c-9ea7-e006e692e9be\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.807572 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hzlb\" (UniqueName: \"kubernetes.io/projected/de9cf93c-00d2-489c-9ea7-e006e692e9be-kube-api-access-7hzlb\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"de9cf93c-00d2-489c-9ea7-e006e692e9be\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.807619 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/de9cf93c-00d2-489c-9ea7-e006e692e9be-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"de9cf93c-00d2-489c-9ea7-e006e692e9be\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.807642 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de9cf93c-00d2-489c-9ea7-e006e692e9be-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"de9cf93c-00d2-489c-9ea7-e006e692e9be\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.811408 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f704b1c0-8f43-442d-85fd-1ed65c91ac3a-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"f704b1c0-8f43-442d-85fd-1ed65c91ac3a\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.812054 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f704b1c0-8f43-442d-85fd-1ed65c91ac3a-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"f704b1c0-8f43-442d-85fd-1ed65c91ac3a\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.813088 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f704b1c0-8f43-442d-85fd-1ed65c91ac3a-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"f704b1c0-8f43-442d-85fd-1ed65c91ac3a\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.813970 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f704b1c0-8f43-442d-85fd-1ed65c91ac3a-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"f704b1c0-8f43-442d-85fd-1ed65c91ac3a\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.823173 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fj7b\" (UniqueName: \"kubernetes.io/projected/f704b1c0-8f43-442d-85fd-1ed65c91ac3a-kube-api-access-8fj7b\") pod \"watcher-kuttl-applier-0\" (UID: \"f704b1c0-8f43-442d-85fd-1ed65c91ac3a\") " 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.897158 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.911015 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hzlb\" (UniqueName: \"kubernetes.io/projected/de9cf93c-00d2-489c-9ea7-e006e692e9be-kube-api-access-7hzlb\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"de9cf93c-00d2-489c-9ea7-e006e692e9be\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.911189 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/de9cf93c-00d2-489c-9ea7-e006e692e9be-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"de9cf93c-00d2-489c-9ea7-e006e692e9be\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.911243 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de9cf93c-00d2-489c-9ea7-e006e692e9be-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"de9cf93c-00d2-489c-9ea7-e006e692e9be\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.915680 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de9cf93c-00d2-489c-9ea7-e006e692e9be-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"de9cf93c-00d2-489c-9ea7-e006e692e9be\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.915790 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/de9cf93c-00d2-489c-9ea7-e006e692e9be-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"de9cf93c-00d2-489c-9ea7-e006e692e9be\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.915939 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de9cf93c-00d2-489c-9ea7-e006e692e9be-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"de9cf93c-00d2-489c-9ea7-e006e692e9be\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.916862 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de9cf93c-00d2-489c-9ea7-e006e692e9be-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"de9cf93c-00d2-489c-9ea7-e006e692e9be\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.918112 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de9cf93c-00d2-489c-9ea7-e006e692e9be-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"de9cf93c-00d2-489c-9ea7-e006e692e9be\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.919727 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de9cf93c-00d2-489c-9ea7-e006e692e9be-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"de9cf93c-00d2-489c-9ea7-e006e692e9be\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.919977 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/de9cf93c-00d2-489c-9ea7-e006e692e9be-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"de9cf93c-00d2-489c-9ea7-e006e692e9be\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.922802 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/de9cf93c-00d2-489c-9ea7-e006e692e9be-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"de9cf93c-00d2-489c-9ea7-e006e692e9be\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.929933 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hzlb\" (UniqueName: \"kubernetes.io/projected/de9cf93c-00d2-489c-9ea7-e006e692e9be-kube-api-access-7hzlb\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"de9cf93c-00d2-489c-9ea7-e006e692e9be\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:38:50 crc kubenswrapper[4796]: I1202 20:38:50.990725 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:38:51 crc kubenswrapper[4796]: I1202 20:38:51.112593 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:38:51 crc kubenswrapper[4796]: I1202 20:38:51.488987 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:38:51 crc kubenswrapper[4796]: I1202 20:38:51.600106 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:38:51 crc kubenswrapper[4796]: W1202 20:38:51.610569 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf704b1c0_8f43_442d_85fd_1ed65c91ac3a.slice/crio-40a10f1cf7069a86b7a3c6319988bf928a10c3cd0913a322b6bfa04edcf0fcdd WatchSource:0}: Error finding container 40a10f1cf7069a86b7a3c6319988bf928a10c3cd0913a322b6bfa04edcf0fcdd: Status 404 returned error can't find the container with id 40a10f1cf7069a86b7a3c6319988bf928a10c3cd0913a322b6bfa04edcf0fcdd Dec 02 20:38:51 crc kubenswrapper[4796]: W1202 20:38:51.672725 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde9cf93c_00d2_489c_9ea7_e006e692e9be.slice/crio-42591179de814cfd3392406703b2d2c529be76a15654f62f2e6e8f034677ae7e WatchSource:0}: Error finding container 42591179de814cfd3392406703b2d2c529be76a15654f62f2e6e8f034677ae7e: Status 404 returned error can't find the container with id 42591179de814cfd3392406703b2d2c529be76a15654f62f2e6e8f034677ae7e Dec 02 20:38:51 crc kubenswrapper[4796]: I1202 20:38:51.675039 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:38:52 crc kubenswrapper[4796]: I1202 20:38:52.299735 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"de9cf93c-00d2-489c-9ea7-e006e692e9be","Type":"ContainerStarted","Data":"72c5154c874e21947b0871a57b47e631a1c2c886eb2f8a6e6651642f4694abba"} Dec 02 20:38:52 crc kubenswrapper[4796]: I1202 20:38:52.300136 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"de9cf93c-00d2-489c-9ea7-e006e692e9be","Type":"ContainerStarted","Data":"42591179de814cfd3392406703b2d2c529be76a15654f62f2e6e8f034677ae7e"} Dec 02 20:38:52 crc kubenswrapper[4796]: I1202 20:38:52.304551 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"f704b1c0-8f43-442d-85fd-1ed65c91ac3a","Type":"ContainerStarted","Data":"2fabd26e24c472ff082e3ef7812954c02df1fce24524e9890c39ae7f2ca85ecf"} Dec 02 20:38:52 crc kubenswrapper[4796]: I1202 20:38:52.304617 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"f704b1c0-8f43-442d-85fd-1ed65c91ac3a","Type":"ContainerStarted","Data":"40a10f1cf7069a86b7a3c6319988bf928a10c3cd0913a322b6bfa04edcf0fcdd"} Dec 02 20:38:52 crc kubenswrapper[4796]: I1202 20:38:52.309894 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f393f072-a05c-4fcb-a197-434c890e5dd6","Type":"ContainerStarted","Data":"17f6092877c3089abd982c52e8ff604ec306860fa444c1b5984717e69b860aec"} Dec 02 20:38:52 crc kubenswrapper[4796]: I1202 20:38:52.309938 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" 
event={"ID":"f393f072-a05c-4fcb-a197-434c890e5dd6","Type":"ContainerStarted","Data":"e82a697072cd554df539c6e1c4ab57711af2c8c339863bb7980760c57eccec5e"} Dec 02 20:38:52 crc kubenswrapper[4796]: I1202 20:38:52.309949 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f393f072-a05c-4fcb-a197-434c890e5dd6","Type":"ContainerStarted","Data":"2fcb0265f65629d4a7b8833f596110ab2682030ecb74dab5d537628d60532e78"} Dec 02 20:38:52 crc kubenswrapper[4796]: I1202 20:38:52.310562 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:38:52 crc kubenswrapper[4796]: I1202 20:38:52.323506 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.323487628 podStartE2EDuration="2.323487628s" podCreationTimestamp="2025-12-02 20:38:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:38:52.317066785 +0000 UTC m=+1615.320442359" watchObservedRunningTime="2025-12-02 20:38:52.323487628 +0000 UTC m=+1615.326863172" Dec 02 20:38:52 crc kubenswrapper[4796]: I1202 20:38:52.347206 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.347181308 podStartE2EDuration="2.347181308s" podCreationTimestamp="2025-12-02 20:38:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:38:52.334611466 +0000 UTC m=+1615.337987020" watchObservedRunningTime="2025-12-02 20:38:52.347181308 +0000 UTC m=+1615.350556842" Dec 02 20:38:52 crc kubenswrapper[4796]: I1202 20:38:52.369609 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.369587397 podStartE2EDuration="2.369587397s" podCreationTimestamp="2025-12-02 20:38:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:38:52.359814292 +0000 UTC m=+1615.363189826" watchObservedRunningTime="2025-12-02 20:38:52.369587397 +0000 UTC m=+1615.372962931" Dec 02 20:38:53 crc kubenswrapper[4796]: I1202 20:38:53.144702 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:38:54 crc kubenswrapper[4796]: I1202 20:38:54.330673 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:38:54 crc kubenswrapper[4796]: I1202 20:38:54.986315 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:38:55 crc kubenswrapper[4796]: I1202 20:38:55.570072 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:38:55 crc kubenswrapper[4796]: I1202 20:38:55.898291 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:38:55 crc kubenswrapper[4796]: I1202 
20:38:55.991802 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:38:56 crc kubenswrapper[4796]: I1202 20:38:56.765285 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:38:58 crc kubenswrapper[4796]: I1202 20:38:58.023676 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:38:59 crc kubenswrapper[4796]: I1202 20:38:59.244397 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:38:59 crc kubenswrapper[4796]: I1202 20:38:59.265751 4796 scope.go:117] "RemoveContainer" containerID="a3b1e4c1e71e5db50d52130283a4f488c4d2f8d9bfa463ec7357e0975e3daac1" Dec 02 20:38:59 crc kubenswrapper[4796]: E1202 20:38:59.266059 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wzhpq_openshift-machine-config-operator(5558dc7c-93f9-4212-bf22-fdec743e47ee)\"" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" Dec 02 20:39:00 crc kubenswrapper[4796]: I1202 20:39:00.487188 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:00 crc kubenswrapper[4796]: I1202 20:39:00.898163 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:39:00 crc kubenswrapper[4796]: I1202 20:39:00.908756 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:39:00 crc kubenswrapper[4796]: I1202 20:39:00.992283 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:39:01 crc kubenswrapper[4796]: I1202 20:39:01.033313 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:39:01 crc kubenswrapper[4796]: I1202 20:39:01.113321 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:39:01 crc kubenswrapper[4796]: I1202 20:39:01.146697 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:39:01 crc kubenswrapper[4796]: I1202 20:39:01.419589 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:39:01 crc kubenswrapper[4796]: I1202 20:39:01.425325 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:39:01 crc kubenswrapper[4796]: I1202 20:39:01.475052 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:39:01 crc 
kubenswrapper[4796]: I1202 20:39:01.480778 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:39:01 crc kubenswrapper[4796]: I1202 20:39:01.707764 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:02 crc kubenswrapper[4796]: I1202 20:39:02.965197 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:03 crc kubenswrapper[4796]: I1202 20:39:03.238978 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:03 crc kubenswrapper[4796]: I1202 20:39:03.649269 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-db-create-hd95z"] Dec 02 20:39:03 crc kubenswrapper[4796]: I1202 20:39:03.650688 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-db-create-hd95z" Dec 02 20:39:03 crc kubenswrapper[4796]: I1202 20:39:03.660497 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-db-create-hd95z"] Dec 02 20:39:03 crc kubenswrapper[4796]: I1202 20:39:03.747934 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-4f01-account-create-update-zwh9c"] Dec 02 20:39:03 crc kubenswrapper[4796]: I1202 20:39:03.750180 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-4f01-account-create-update-zwh9c" Dec 02 20:39:03 crc kubenswrapper[4796]: I1202 20:39:03.753000 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/456f8361-9b09-40dd-943c-90191091aef2-operator-scripts\") pod \"cinder-db-create-hd95z\" (UID: \"456f8361-9b09-40dd-943c-90191091aef2\") " pod="watcher-kuttl-default/cinder-db-create-hd95z" Dec 02 20:39:03 crc kubenswrapper[4796]: I1202 20:39:03.753081 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94jrz\" (UniqueName: \"kubernetes.io/projected/456f8361-9b09-40dd-943c-90191091aef2-kube-api-access-94jrz\") pod \"cinder-db-create-hd95z\" (UID: \"456f8361-9b09-40dd-943c-90191091aef2\") " pod="watcher-kuttl-default/cinder-db-create-hd95z" Dec 02 20:39:03 crc kubenswrapper[4796]: I1202 20:39:03.754796 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-db-secret" Dec 02 20:39:03 crc kubenswrapper[4796]: I1202 20:39:03.756701 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-4f01-account-create-update-zwh9c"] Dec 02 20:39:03 crc kubenswrapper[4796]: I1202 20:39:03.855842 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c79c74cf-8574-42e6-954b-42455083b729-operator-scripts\") pod \"cinder-4f01-account-create-update-zwh9c\" (UID: \"c79c74cf-8574-42e6-954b-42455083b729\") " pod="watcher-kuttl-default/cinder-4f01-account-create-update-zwh9c" Dec 02 20:39:03 crc kubenswrapper[4796]: I1202 20:39:03.855931 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr5xp\" (UniqueName: \"kubernetes.io/projected/c79c74cf-8574-42e6-954b-42455083b729-kube-api-access-kr5xp\") pod \"cinder-4f01-account-create-update-zwh9c\" (UID: \"c79c74cf-8574-42e6-954b-42455083b729\") " pod="watcher-kuttl-default/cinder-4f01-account-create-update-zwh9c" Dec 02 20:39:03 crc kubenswrapper[4796]: I1202 20:39:03.856284 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/456f8361-9b09-40dd-943c-90191091aef2-operator-scripts\") pod \"cinder-db-create-hd95z\" (UID: \"456f8361-9b09-40dd-943c-90191091aef2\") " pod="watcher-kuttl-default/cinder-db-create-hd95z" Dec 02 20:39:03 crc kubenswrapper[4796]: I1202 20:39:03.856385 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94jrz\" (UniqueName: \"kubernetes.io/projected/456f8361-9b09-40dd-943c-90191091aef2-kube-api-access-94jrz\") pod \"cinder-db-create-hd95z\" (UID: \"456f8361-9b09-40dd-943c-90191091aef2\") " pod="watcher-kuttl-default/cinder-db-create-hd95z" Dec 02 20:39:03 crc kubenswrapper[4796]: I1202 20:39:03.857434 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/456f8361-9b09-40dd-943c-90191091aef2-operator-scripts\") pod \"cinder-db-create-hd95z\" (UID: \"456f8361-9b09-40dd-943c-90191091aef2\") " pod="watcher-kuttl-default/cinder-db-create-hd95z" Dec 02 20:39:03 crc kubenswrapper[4796]: I1202 20:39:03.878512 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94jrz\" (UniqueName: \"kubernetes.io/projected/456f8361-9b09-40dd-943c-90191091aef2-kube-api-access-94jrz\") pod \"cinder-db-create-hd95z\" (UID: \"456f8361-9b09-40dd-943c-90191091aef2\") " pod="watcher-kuttl-default/cinder-db-create-hd95z" Dec 02 20:39:03 crc kubenswrapper[4796]: I1202 20:39:03.959359 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c79c74cf-8574-42e6-954b-42455083b729-operator-scripts\") pod \"cinder-4f01-account-create-update-zwh9c\" (UID: \"c79c74cf-8574-42e6-954b-42455083b729\") " pod="watcher-kuttl-default/cinder-4f01-account-create-update-zwh9c" Dec 02 20:39:03 crc kubenswrapper[4796]: I1202 20:39:03.959744 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr5xp\" (UniqueName: \"kubernetes.io/projected/c79c74cf-8574-42e6-954b-42455083b729-kube-api-access-kr5xp\") pod \"cinder-4f01-account-create-update-zwh9c\" (UID: \"c79c74cf-8574-42e6-954b-42455083b729\") " pod="watcher-kuttl-default/cinder-4f01-account-create-update-zwh9c" Dec 02 20:39:03 crc kubenswrapper[4796]: I1202 20:39:03.960154 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c79c74cf-8574-42e6-954b-42455083b729-operator-scripts\") pod \"cinder-4f01-account-create-update-zwh9c\" (UID: \"c79c74cf-8574-42e6-954b-42455083b729\") " pod="watcher-kuttl-default/cinder-4f01-account-create-update-zwh9c" Dec 02 20:39:03 crc kubenswrapper[4796]: I1202 20:39:03.962459 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:39:03 crc kubenswrapper[4796]: I1202 20:39:03.962827 4796 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="watcher-kuttl-default/ceilometer-0" podUID="804820f7-3638-4b4f-ab22-314e4012647f" containerName="ceilometer-central-agent" containerID="cri-o://8479e779c317227471921d9f5e3992e86f7c6301f264f11cee4c0797a0763a28" gracePeriod=30 Dec 02 20:39:03 crc kubenswrapper[4796]: I1202 20:39:03.963000 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="804820f7-3638-4b4f-ab22-314e4012647f" containerName="ceilometer-notification-agent" containerID="cri-o://53334c3e87453282a51362d81f150369b4deccae0948cb90dfe1e571ae809c04" gracePeriod=30 Dec 02 20:39:03 crc kubenswrapper[4796]: I1202 20:39:03.963041 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="804820f7-3638-4b4f-ab22-314e4012647f" containerName="proxy-httpd" containerID="cri-o://b0b3203801006fd1c81f767a62cd5127d7e1a61c3ca04a1071059c899275f87a" gracePeriod=30 Dec 02 20:39:03 crc kubenswrapper[4796]: I1202 20:39:03.962979 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="804820f7-3638-4b4f-ab22-314e4012647f" containerName="sg-core" containerID="cri-o://07aac40592f468c6ce1bd0b66307efbd8c749174a1f0af8357080305c246772c" gracePeriod=30 Dec 02 20:39:03 crc kubenswrapper[4796]: I1202 20:39:03.989228 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr5xp\" (UniqueName: \"kubernetes.io/projected/c79c74cf-8574-42e6-954b-42455083b729-kube-api-access-kr5xp\") pod \"cinder-4f01-account-create-update-zwh9c\" (UID: \"c79c74cf-8574-42e6-954b-42455083b729\") " pod="watcher-kuttl-default/cinder-4f01-account-create-update-zwh9c" Dec 02 20:39:03 crc kubenswrapper[4796]: I1202 20:39:03.991020 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-db-create-hd95z" Dec 02 20:39:03 crc kubenswrapper[4796]: I1202 20:39:03.996553 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="804820f7-3638-4b4f-ab22-314e4012647f" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Dec 02 20:39:04 crc kubenswrapper[4796]: I1202 20:39:04.071198 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-4f01-account-create-update-zwh9c" Dec 02 20:39:04 crc kubenswrapper[4796]: I1202 20:39:04.456608 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:04 crc kubenswrapper[4796]: I1202 20:39:04.466212 4796 generic.go:334] "Generic (PLEG): container finished" podID="804820f7-3638-4b4f-ab22-314e4012647f" containerID="b0b3203801006fd1c81f767a62cd5127d7e1a61c3ca04a1071059c899275f87a" exitCode=0 Dec 02 20:39:04 crc kubenswrapper[4796]: I1202 20:39:04.466279 4796 generic.go:334] "Generic (PLEG): container finished" podID="804820f7-3638-4b4f-ab22-314e4012647f" containerID="07aac40592f468c6ce1bd0b66307efbd8c749174a1f0af8357080305c246772c" exitCode=2 Dec 02 20:39:04 crc kubenswrapper[4796]: I1202 20:39:04.466325 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"804820f7-3638-4b4f-ab22-314e4012647f","Type":"ContainerDied","Data":"b0b3203801006fd1c81f767a62cd5127d7e1a61c3ca04a1071059c899275f87a"} Dec 02 20:39:04 crc kubenswrapper[4796]: I1202 20:39:04.466385 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"804820f7-3638-4b4f-ab22-314e4012647f","Type":"ContainerDied","Data":"07aac40592f468c6ce1bd0b66307efbd8c749174a1f0af8357080305c246772c"} Dec 02 20:39:04 crc kubenswrapper[4796]: I1202 20:39:04.564081 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-db-create-hd95z"] Dec 02 20:39:04 crc kubenswrapper[4796]: I1202 20:39:04.637153 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-4f01-account-create-update-zwh9c"] Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.419732 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.498174 4796 generic.go:334] "Generic (PLEG): container finished" podID="c79c74cf-8574-42e6-954b-42455083b729" containerID="bb23a8dd02fc62064058c3f08dfe276714c86c9f01f3bb81e50bd65734f03a95" exitCode=0 Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.498311 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-4f01-account-create-update-zwh9c" event={"ID":"c79c74cf-8574-42e6-954b-42455083b729","Type":"ContainerDied","Data":"bb23a8dd02fc62064058c3f08dfe276714c86c9f01f3bb81e50bd65734f03a95"} Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.498387 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-4f01-account-create-update-zwh9c" event={"ID":"c79c74cf-8574-42e6-954b-42455083b729","Type":"ContainerStarted","Data":"f7a2e86afdf091be0685e2e2e3b0bfbc8da0b799e5aeeaec28b233cc46ffca4c"} Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.499885 4796 generic.go:334] "Generic (PLEG): container finished" podID="456f8361-9b09-40dd-943c-90191091aef2" containerID="ab16702cdb1959f03fc3178301e16c70f8a51cef591e86b7f1e3777f639debf5" exitCode=0 Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.499946 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-create-hd95z" event={"ID":"456f8361-9b09-40dd-943c-90191091aef2","Type":"ContainerDied","Data":"ab16702cdb1959f03fc3178301e16c70f8a51cef591e86b7f1e3777f639debf5"} Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.499975 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-create-hd95z" event={"ID":"456f8361-9b09-40dd-943c-90191091aef2","Type":"ContainerStarted","Data":"72deee44925f0925eefe4e17f16588c887a4c00c66db7c3e3bcca9ed8538da56"} Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.503839 4796 generic.go:334] "Generic (PLEG): container finished" podID="804820f7-3638-4b4f-ab22-314e4012647f" containerID="53334c3e87453282a51362d81f150369b4deccae0948cb90dfe1e571ae809c04" exitCode=0 Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.503865 4796 generic.go:334] "Generic (PLEG): container finished" podID="804820f7-3638-4b4f-ab22-314e4012647f" containerID="8479e779c317227471921d9f5e3992e86f7c6301f264f11cee4c0797a0763a28" exitCode=0 Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.503885 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"804820f7-3638-4b4f-ab22-314e4012647f","Type":"ContainerDied","Data":"53334c3e87453282a51362d81f150369b4deccae0948cb90dfe1e571ae809c04"} Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.503905 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"804820f7-3638-4b4f-ab22-314e4012647f","Type":"ContainerDied","Data":"8479e779c317227471921d9f5e3992e86f7c6301f264f11cee4c0797a0763a28"} Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.503915 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"804820f7-3638-4b4f-ab22-314e4012647f","Type":"ContainerDied","Data":"1daf09f2d90d260b8ea2f423bceb7461458192bb1bd7d49f7307202382a34a72"} Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.503930 4796 scope.go:117] "RemoveContainer" containerID="b0b3203801006fd1c81f767a62cd5127d7e1a61c3ca04a1071059c899275f87a" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 
20:39:05.504051 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.534969 4796 scope.go:117] "RemoveContainer" containerID="07aac40592f468c6ce1bd0b66307efbd8c749174a1f0af8357080305c246772c" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.558387 4796 scope.go:117] "RemoveContainer" containerID="53334c3e87453282a51362d81f150369b4deccae0948cb90dfe1e571ae809c04" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.579238 4796 scope.go:117] "RemoveContainer" containerID="8479e779c317227471921d9f5e3992e86f7c6301f264f11cee4c0797a0763a28" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.589021 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/804820f7-3638-4b4f-ab22-314e4012647f-scripts\") pod \"804820f7-3638-4b4f-ab22-314e4012647f\" (UID: \"804820f7-3638-4b4f-ab22-314e4012647f\") " Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.589084 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/804820f7-3638-4b4f-ab22-314e4012647f-sg-core-conf-yaml\") pod \"804820f7-3638-4b4f-ab22-314e4012647f\" (UID: \"804820f7-3638-4b4f-ab22-314e4012647f\") " Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.589988 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/804820f7-3638-4b4f-ab22-314e4012647f-combined-ca-bundle\") pod \"804820f7-3638-4b4f-ab22-314e4012647f\" (UID: \"804820f7-3638-4b4f-ab22-314e4012647f\") " Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.590040 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/804820f7-3638-4b4f-ab22-314e4012647f-run-httpd\") pod \"804820f7-3638-4b4f-ab22-314e4012647f\" (UID: \"804820f7-3638-4b4f-ab22-314e4012647f\") " Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.590081 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62whw\" (UniqueName: \"kubernetes.io/projected/804820f7-3638-4b4f-ab22-314e4012647f-kube-api-access-62whw\") pod \"804820f7-3638-4b4f-ab22-314e4012647f\" (UID: \"804820f7-3638-4b4f-ab22-314e4012647f\") " Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.590144 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/804820f7-3638-4b4f-ab22-314e4012647f-log-httpd\") pod \"804820f7-3638-4b4f-ab22-314e4012647f\" (UID: \"804820f7-3638-4b4f-ab22-314e4012647f\") " Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.590221 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/804820f7-3638-4b4f-ab22-314e4012647f-ceilometer-tls-certs\") pod \"804820f7-3638-4b4f-ab22-314e4012647f\" (UID: \"804820f7-3638-4b4f-ab22-314e4012647f\") " Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.590436 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/804820f7-3638-4b4f-ab22-314e4012647f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "804820f7-3638-4b4f-ab22-314e4012647f" (UID: "804820f7-3638-4b4f-ab22-314e4012647f"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.590470 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/804820f7-3638-4b4f-ab22-314e4012647f-config-data\") pod \"804820f7-3638-4b4f-ab22-314e4012647f\" (UID: \"804820f7-3638-4b4f-ab22-314e4012647f\") " Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.590689 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/804820f7-3638-4b4f-ab22-314e4012647f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "804820f7-3638-4b4f-ab22-314e4012647f" (UID: "804820f7-3638-4b4f-ab22-314e4012647f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.592281 4796 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/804820f7-3638-4b4f-ab22-314e4012647f-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.592301 4796 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/804820f7-3638-4b4f-ab22-314e4012647f-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.596515 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/804820f7-3638-4b4f-ab22-314e4012647f-kube-api-access-62whw" (OuterVolumeSpecName: "kube-api-access-62whw") pod "804820f7-3638-4b4f-ab22-314e4012647f" (UID: "804820f7-3638-4b4f-ab22-314e4012647f"). InnerVolumeSpecName "kube-api-access-62whw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.597865 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/804820f7-3638-4b4f-ab22-314e4012647f-scripts" (OuterVolumeSpecName: "scripts") pod "804820f7-3638-4b4f-ab22-314e4012647f" (UID: "804820f7-3638-4b4f-ab22-314e4012647f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.607928 4796 scope.go:117] "RemoveContainer" containerID="b0b3203801006fd1c81f767a62cd5127d7e1a61c3ca04a1071059c899275f87a" Dec 02 20:39:05 crc kubenswrapper[4796]: E1202 20:39:05.608396 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0b3203801006fd1c81f767a62cd5127d7e1a61c3ca04a1071059c899275f87a\": container with ID starting with b0b3203801006fd1c81f767a62cd5127d7e1a61c3ca04a1071059c899275f87a not found: ID does not exist" containerID="b0b3203801006fd1c81f767a62cd5127d7e1a61c3ca04a1071059c899275f87a" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.608440 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0b3203801006fd1c81f767a62cd5127d7e1a61c3ca04a1071059c899275f87a"} err="failed to get container status \"b0b3203801006fd1c81f767a62cd5127d7e1a61c3ca04a1071059c899275f87a\": rpc error: code = NotFound desc = could not find container \"b0b3203801006fd1c81f767a62cd5127d7e1a61c3ca04a1071059c899275f87a\": container with ID starting with b0b3203801006fd1c81f767a62cd5127d7e1a61c3ca04a1071059c899275f87a not found: ID does not exist" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.608471 4796 scope.go:117] "RemoveContainer" containerID="07aac40592f468c6ce1bd0b66307efbd8c749174a1f0af8357080305c246772c" Dec 02 20:39:05 crc kubenswrapper[4796]: E1202 20:39:05.609063 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07aac40592f468c6ce1bd0b66307efbd8c749174a1f0af8357080305c246772c\": container with ID starting with 07aac40592f468c6ce1bd0b66307efbd8c749174a1f0af8357080305c246772c not found: ID does not exist" containerID="07aac40592f468c6ce1bd0b66307efbd8c749174a1f0af8357080305c246772c" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.609096 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07aac40592f468c6ce1bd0b66307efbd8c749174a1f0af8357080305c246772c"} err="failed to get container status \"07aac40592f468c6ce1bd0b66307efbd8c749174a1f0af8357080305c246772c\": rpc error: code = NotFound desc = could not find container \"07aac40592f468c6ce1bd0b66307efbd8c749174a1f0af8357080305c246772c\": container with ID starting with 07aac40592f468c6ce1bd0b66307efbd8c749174a1f0af8357080305c246772c not found: ID does not exist" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.609110 4796 scope.go:117] "RemoveContainer" containerID="53334c3e87453282a51362d81f150369b4deccae0948cb90dfe1e571ae809c04" Dec 02 20:39:05 crc kubenswrapper[4796]: E1202 20:39:05.609534 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53334c3e87453282a51362d81f150369b4deccae0948cb90dfe1e571ae809c04\": container with ID starting with 53334c3e87453282a51362d81f150369b4deccae0948cb90dfe1e571ae809c04 not found: ID does not exist" containerID="53334c3e87453282a51362d81f150369b4deccae0948cb90dfe1e571ae809c04" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.609560 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53334c3e87453282a51362d81f150369b4deccae0948cb90dfe1e571ae809c04"} err="failed to get container status \"53334c3e87453282a51362d81f150369b4deccae0948cb90dfe1e571ae809c04\": rpc error: code = NotFound desc = could not 
find container \"53334c3e87453282a51362d81f150369b4deccae0948cb90dfe1e571ae809c04\": container with ID starting with 53334c3e87453282a51362d81f150369b4deccae0948cb90dfe1e571ae809c04 not found: ID does not exist" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.609578 4796 scope.go:117] "RemoveContainer" containerID="8479e779c317227471921d9f5e3992e86f7c6301f264f11cee4c0797a0763a28" Dec 02 20:39:05 crc kubenswrapper[4796]: E1202 20:39:05.610020 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8479e779c317227471921d9f5e3992e86f7c6301f264f11cee4c0797a0763a28\": container with ID starting with 8479e779c317227471921d9f5e3992e86f7c6301f264f11cee4c0797a0763a28 not found: ID does not exist" containerID="8479e779c317227471921d9f5e3992e86f7c6301f264f11cee4c0797a0763a28" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.610044 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8479e779c317227471921d9f5e3992e86f7c6301f264f11cee4c0797a0763a28"} err="failed to get container status \"8479e779c317227471921d9f5e3992e86f7c6301f264f11cee4c0797a0763a28\": rpc error: code = NotFound desc = could not find container \"8479e779c317227471921d9f5e3992e86f7c6301f264f11cee4c0797a0763a28\": container with ID starting with 8479e779c317227471921d9f5e3992e86f7c6301f264f11cee4c0797a0763a28 not found: ID does not exist" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.610056 4796 scope.go:117] "RemoveContainer" containerID="b0b3203801006fd1c81f767a62cd5127d7e1a61c3ca04a1071059c899275f87a" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.610733 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0b3203801006fd1c81f767a62cd5127d7e1a61c3ca04a1071059c899275f87a"} err="failed to get container status \"b0b3203801006fd1c81f767a62cd5127d7e1a61c3ca04a1071059c899275f87a\": rpc error: code = NotFound desc = could not find container \"b0b3203801006fd1c81f767a62cd5127d7e1a61c3ca04a1071059c899275f87a\": container with ID starting with b0b3203801006fd1c81f767a62cd5127d7e1a61c3ca04a1071059c899275f87a not found: ID does not exist" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.610756 4796 scope.go:117] "RemoveContainer" containerID="07aac40592f468c6ce1bd0b66307efbd8c749174a1f0af8357080305c246772c" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.610974 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07aac40592f468c6ce1bd0b66307efbd8c749174a1f0af8357080305c246772c"} err="failed to get container status \"07aac40592f468c6ce1bd0b66307efbd8c749174a1f0af8357080305c246772c\": rpc error: code = NotFound desc = could not find container \"07aac40592f468c6ce1bd0b66307efbd8c749174a1f0af8357080305c246772c\": container with ID starting with 07aac40592f468c6ce1bd0b66307efbd8c749174a1f0af8357080305c246772c not found: ID does not exist" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.610997 4796 scope.go:117] "RemoveContainer" containerID="53334c3e87453282a51362d81f150369b4deccae0948cb90dfe1e571ae809c04" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.611266 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53334c3e87453282a51362d81f150369b4deccae0948cb90dfe1e571ae809c04"} err="failed to get container status \"53334c3e87453282a51362d81f150369b4deccae0948cb90dfe1e571ae809c04\": rpc error: code = NotFound desc = could not 
find container \"53334c3e87453282a51362d81f150369b4deccae0948cb90dfe1e571ae809c04\": container with ID starting with 53334c3e87453282a51362d81f150369b4deccae0948cb90dfe1e571ae809c04 not found: ID does not exist" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.611286 4796 scope.go:117] "RemoveContainer" containerID="8479e779c317227471921d9f5e3992e86f7c6301f264f11cee4c0797a0763a28" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.611560 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8479e779c317227471921d9f5e3992e86f7c6301f264f11cee4c0797a0763a28"} err="failed to get container status \"8479e779c317227471921d9f5e3992e86f7c6301f264f11cee4c0797a0763a28\": rpc error: code = NotFound desc = could not find container \"8479e779c317227471921d9f5e3992e86f7c6301f264f11cee4c0797a0763a28\": container with ID starting with 8479e779c317227471921d9f5e3992e86f7c6301f264f11cee4c0797a0763a28 not found: ID does not exist" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.618975 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/804820f7-3638-4b4f-ab22-314e4012647f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "804820f7-3638-4b4f-ab22-314e4012647f" (UID: "804820f7-3638-4b4f-ab22-314e4012647f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.649037 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/804820f7-3638-4b4f-ab22-314e4012647f-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "804820f7-3638-4b4f-ab22-314e4012647f" (UID: "804820f7-3638-4b4f-ab22-314e4012647f"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.660601 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/804820f7-3638-4b4f-ab22-314e4012647f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "804820f7-3638-4b4f-ab22-314e4012647f" (UID: "804820f7-3638-4b4f-ab22-314e4012647f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.688974 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/804820f7-3638-4b4f-ab22-314e4012647f-config-data" (OuterVolumeSpecName: "config-data") pod "804820f7-3638-4b4f-ab22-314e4012647f" (UID: "804820f7-3638-4b4f-ab22-314e4012647f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.693994 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/804820f7-3638-4b4f-ab22-314e4012647f-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.694033 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/804820f7-3638-4b4f-ab22-314e4012647f-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.694042 4796 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/804820f7-3638-4b4f-ab22-314e4012647f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.694053 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/804820f7-3638-4b4f-ab22-314e4012647f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.694064 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62whw\" (UniqueName: \"kubernetes.io/projected/804820f7-3638-4b4f-ab22-314e4012647f-kube-api-access-62whw\") on node \"crc\" DevicePath \"\"" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.694072 4796 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/804820f7-3638-4b4f-ab22-314e4012647f-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.749314 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.843345 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.849900 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.866163 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:39:05 crc kubenswrapper[4796]: E1202 20:39:05.866593 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="804820f7-3638-4b4f-ab22-314e4012647f" containerName="ceilometer-notification-agent" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.866617 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="804820f7-3638-4b4f-ab22-314e4012647f" containerName="ceilometer-notification-agent" Dec 02 20:39:05 crc kubenswrapper[4796]: E1202 20:39:05.866634 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="804820f7-3638-4b4f-ab22-314e4012647f" containerName="proxy-httpd" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.866645 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="804820f7-3638-4b4f-ab22-314e4012647f" containerName="proxy-httpd" Dec 02 20:39:05 crc kubenswrapper[4796]: E1202 20:39:05.866667 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="804820f7-3638-4b4f-ab22-314e4012647f" containerName="sg-core" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.866676 4796 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="804820f7-3638-4b4f-ab22-314e4012647f" containerName="sg-core" Dec 02 20:39:05 crc kubenswrapper[4796]: E1202 20:39:05.866704 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="804820f7-3638-4b4f-ab22-314e4012647f" containerName="ceilometer-central-agent" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.866714 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="804820f7-3638-4b4f-ab22-314e4012647f" containerName="ceilometer-central-agent" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.866927 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="804820f7-3638-4b4f-ab22-314e4012647f" containerName="sg-core" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.866950 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="804820f7-3638-4b4f-ab22-314e4012647f" containerName="proxy-httpd" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.866974 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="804820f7-3638-4b4f-ab22-314e4012647f" containerName="ceilometer-central-agent" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.866987 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="804820f7-3638-4b4f-ab22-314e4012647f" containerName="ceilometer-notification-agent" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.869108 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.872080 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.872691 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.873065 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Dec 02 20:39:05 crc kubenswrapper[4796]: I1202 20:39:05.910305 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:39:06 crc kubenswrapper[4796]: I1202 20:39:06.008237 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb267cf2-7c07-4014-b06c-d34b5517f6ef-run-httpd\") pod \"ceilometer-0\" (UID: \"eb267cf2-7c07-4014-b06c-d34b5517f6ef\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:06 crc kubenswrapper[4796]: I1202 20:39:06.008318 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb267cf2-7c07-4014-b06c-d34b5517f6ef-config-data\") pod \"ceilometer-0\" (UID: \"eb267cf2-7c07-4014-b06c-d34b5517f6ef\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:06 crc kubenswrapper[4796]: I1202 20:39:06.008342 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlqsk\" (UniqueName: \"kubernetes.io/projected/eb267cf2-7c07-4014-b06c-d34b5517f6ef-kube-api-access-vlqsk\") pod \"ceilometer-0\" (UID: \"eb267cf2-7c07-4014-b06c-d34b5517f6ef\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:06 crc kubenswrapper[4796]: I1202 20:39:06.008371 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/eb267cf2-7c07-4014-b06c-d34b5517f6ef-log-httpd\") pod \"ceilometer-0\" (UID: \"eb267cf2-7c07-4014-b06c-d34b5517f6ef\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:06 crc kubenswrapper[4796]: I1202 20:39:06.008402 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb267cf2-7c07-4014-b06c-d34b5517f6ef-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eb267cf2-7c07-4014-b06c-d34b5517f6ef\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:06 crc kubenswrapper[4796]: I1202 20:39:06.008441 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb267cf2-7c07-4014-b06c-d34b5517f6ef-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eb267cf2-7c07-4014-b06c-d34b5517f6ef\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:06 crc kubenswrapper[4796]: I1202 20:39:06.008482 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb267cf2-7c07-4014-b06c-d34b5517f6ef-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"eb267cf2-7c07-4014-b06c-d34b5517f6ef\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:06 crc kubenswrapper[4796]: I1202 20:39:06.008510 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb267cf2-7c07-4014-b06c-d34b5517f6ef-scripts\") pod \"ceilometer-0\" (UID: \"eb267cf2-7c07-4014-b06c-d34b5517f6ef\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:06 crc kubenswrapper[4796]: I1202 20:39:06.110274 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb267cf2-7c07-4014-b06c-d34b5517f6ef-run-httpd\") pod \"ceilometer-0\" (UID: \"eb267cf2-7c07-4014-b06c-d34b5517f6ef\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:06 crc kubenswrapper[4796]: I1202 20:39:06.110334 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb267cf2-7c07-4014-b06c-d34b5517f6ef-config-data\") pod \"ceilometer-0\" (UID: \"eb267cf2-7c07-4014-b06c-d34b5517f6ef\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:06 crc kubenswrapper[4796]: I1202 20:39:06.110351 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlqsk\" (UniqueName: \"kubernetes.io/projected/eb267cf2-7c07-4014-b06c-d34b5517f6ef-kube-api-access-vlqsk\") pod \"ceilometer-0\" (UID: \"eb267cf2-7c07-4014-b06c-d34b5517f6ef\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:06 crc kubenswrapper[4796]: I1202 20:39:06.110372 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb267cf2-7c07-4014-b06c-d34b5517f6ef-log-httpd\") pod \"ceilometer-0\" (UID: \"eb267cf2-7c07-4014-b06c-d34b5517f6ef\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:06 crc kubenswrapper[4796]: I1202 20:39:06.110397 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb267cf2-7c07-4014-b06c-d34b5517f6ef-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eb267cf2-7c07-4014-b06c-d34b5517f6ef\") " 
pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:06 crc kubenswrapper[4796]: I1202 20:39:06.110423 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb267cf2-7c07-4014-b06c-d34b5517f6ef-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eb267cf2-7c07-4014-b06c-d34b5517f6ef\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:06 crc kubenswrapper[4796]: I1202 20:39:06.110451 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb267cf2-7c07-4014-b06c-d34b5517f6ef-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"eb267cf2-7c07-4014-b06c-d34b5517f6ef\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:06 crc kubenswrapper[4796]: I1202 20:39:06.110476 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb267cf2-7c07-4014-b06c-d34b5517f6ef-scripts\") pod \"ceilometer-0\" (UID: \"eb267cf2-7c07-4014-b06c-d34b5517f6ef\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:06 crc kubenswrapper[4796]: I1202 20:39:06.111016 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb267cf2-7c07-4014-b06c-d34b5517f6ef-run-httpd\") pod \"ceilometer-0\" (UID: \"eb267cf2-7c07-4014-b06c-d34b5517f6ef\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:06 crc kubenswrapper[4796]: I1202 20:39:06.111868 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb267cf2-7c07-4014-b06c-d34b5517f6ef-log-httpd\") pod \"ceilometer-0\" (UID: \"eb267cf2-7c07-4014-b06c-d34b5517f6ef\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:06 crc kubenswrapper[4796]: I1202 20:39:06.116295 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb267cf2-7c07-4014-b06c-d34b5517f6ef-scripts\") pod \"ceilometer-0\" (UID: \"eb267cf2-7c07-4014-b06c-d34b5517f6ef\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:06 crc kubenswrapper[4796]: I1202 20:39:06.121458 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb267cf2-7c07-4014-b06c-d34b5517f6ef-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eb267cf2-7c07-4014-b06c-d34b5517f6ef\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:06 crc kubenswrapper[4796]: I1202 20:39:06.122449 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb267cf2-7c07-4014-b06c-d34b5517f6ef-config-data\") pod \"ceilometer-0\" (UID: \"eb267cf2-7c07-4014-b06c-d34b5517f6ef\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:06 crc kubenswrapper[4796]: I1202 20:39:06.128181 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb267cf2-7c07-4014-b06c-d34b5517f6ef-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"eb267cf2-7c07-4014-b06c-d34b5517f6ef\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:06 crc kubenswrapper[4796]: I1202 20:39:06.128559 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb267cf2-7c07-4014-b06c-d34b5517f6ef-combined-ca-bundle\") pod \"ceilometer-0\" 
(UID: \"eb267cf2-7c07-4014-b06c-d34b5517f6ef\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:06 crc kubenswrapper[4796]: I1202 20:39:06.137803 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlqsk\" (UniqueName: \"kubernetes.io/projected/eb267cf2-7c07-4014-b06c-d34b5517f6ef-kube-api-access-vlqsk\") pod \"ceilometer-0\" (UID: \"eb267cf2-7c07-4014-b06c-d34b5517f6ef\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:06 crc kubenswrapper[4796]: I1202 20:39:06.196555 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:06 crc kubenswrapper[4796]: I1202 20:39:06.705040 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:39:06 crc kubenswrapper[4796]: I1202 20:39:06.869490 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-db-create-hd95z" Dec 02 20:39:06 crc kubenswrapper[4796]: I1202 20:39:06.935709 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/456f8361-9b09-40dd-943c-90191091aef2-operator-scripts\") pod \"456f8361-9b09-40dd-943c-90191091aef2\" (UID: \"456f8361-9b09-40dd-943c-90191091aef2\") " Dec 02 20:39:06 crc kubenswrapper[4796]: I1202 20:39:06.935981 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94jrz\" (UniqueName: \"kubernetes.io/projected/456f8361-9b09-40dd-943c-90191091aef2-kube-api-access-94jrz\") pod \"456f8361-9b09-40dd-943c-90191091aef2\" (UID: \"456f8361-9b09-40dd-943c-90191091aef2\") " Dec 02 20:39:06 crc kubenswrapper[4796]: I1202 20:39:06.949280 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/456f8361-9b09-40dd-943c-90191091aef2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "456f8361-9b09-40dd-943c-90191091aef2" (UID: "456f8361-9b09-40dd-943c-90191091aef2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:39:06 crc kubenswrapper[4796]: I1202 20:39:06.976551 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/456f8361-9b09-40dd-943c-90191091aef2-kube-api-access-94jrz" (OuterVolumeSpecName: "kube-api-access-94jrz") pod "456f8361-9b09-40dd-943c-90191091aef2" (UID: "456f8361-9b09-40dd-943c-90191091aef2"). InnerVolumeSpecName "kube-api-access-94jrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:39:07 crc kubenswrapper[4796]: I1202 20:39:07.009830 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:07 crc kubenswrapper[4796]: I1202 20:39:07.033461 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-4f01-account-create-update-zwh9c" Dec 02 20:39:07 crc kubenswrapper[4796]: I1202 20:39:07.053052 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94jrz\" (UniqueName: \"kubernetes.io/projected/456f8361-9b09-40dd-943c-90191091aef2-kube-api-access-94jrz\") on node \"crc\" DevicePath \"\"" Dec 02 20:39:07 crc kubenswrapper[4796]: I1202 20:39:07.053084 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/456f8361-9b09-40dd-943c-90191091aef2-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:39:07 crc kubenswrapper[4796]: I1202 20:39:07.153545 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c79c74cf-8574-42e6-954b-42455083b729-operator-scripts\") pod \"c79c74cf-8574-42e6-954b-42455083b729\" (UID: \"c79c74cf-8574-42e6-954b-42455083b729\") " Dec 02 20:39:07 crc kubenswrapper[4796]: I1202 20:39:07.153653 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kr5xp\" (UniqueName: \"kubernetes.io/projected/c79c74cf-8574-42e6-954b-42455083b729-kube-api-access-kr5xp\") pod \"c79c74cf-8574-42e6-954b-42455083b729\" (UID: \"c79c74cf-8574-42e6-954b-42455083b729\") " Dec 02 20:39:07 crc kubenswrapper[4796]: I1202 20:39:07.154282 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c79c74cf-8574-42e6-954b-42455083b729-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c79c74cf-8574-42e6-954b-42455083b729" (UID: "c79c74cf-8574-42e6-954b-42455083b729"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:39:07 crc kubenswrapper[4796]: I1202 20:39:07.159462 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c79c74cf-8574-42e6-954b-42455083b729-kube-api-access-kr5xp" (OuterVolumeSpecName: "kube-api-access-kr5xp") pod "c79c74cf-8574-42e6-954b-42455083b729" (UID: "c79c74cf-8574-42e6-954b-42455083b729"). InnerVolumeSpecName "kube-api-access-kr5xp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:39:07 crc kubenswrapper[4796]: I1202 20:39:07.255412 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c79c74cf-8574-42e6-954b-42455083b729-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:39:07 crc kubenswrapper[4796]: I1202 20:39:07.255447 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kr5xp\" (UniqueName: \"kubernetes.io/projected/c79c74cf-8574-42e6-954b-42455083b729-kube-api-access-kr5xp\") on node \"crc\" DevicePath \"\"" Dec 02 20:39:07 crc kubenswrapper[4796]: I1202 20:39:07.275238 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="804820f7-3638-4b4f-ab22-314e4012647f" path="/var/lib/kubelet/pods/804820f7-3638-4b4f-ab22-314e4012647f/volumes" Dec 02 20:39:07 crc kubenswrapper[4796]: I1202 20:39:07.521992 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"eb267cf2-7c07-4014-b06c-d34b5517f6ef","Type":"ContainerStarted","Data":"e5c97a307dcb34bcdad5a6c8db4563a4a02325b0345f80e6eb1eca8bb8f683a6"} Dec 02 20:39:07 crc kubenswrapper[4796]: I1202 20:39:07.524328 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-create-hd95z" event={"ID":"456f8361-9b09-40dd-943c-90191091aef2","Type":"ContainerDied","Data":"72deee44925f0925eefe4e17f16588c887a4c00c66db7c3e3bcca9ed8538da56"} Dec 02 20:39:07 crc kubenswrapper[4796]: I1202 20:39:07.524372 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-db-create-hd95z" Dec 02 20:39:07 crc kubenswrapper[4796]: I1202 20:39:07.524388 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72deee44925f0925eefe4e17f16588c887a4c00c66db7c3e3bcca9ed8538da56" Dec 02 20:39:07 crc kubenswrapper[4796]: I1202 20:39:07.525645 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-4f01-account-create-update-zwh9c" event={"ID":"c79c74cf-8574-42e6-954b-42455083b729","Type":"ContainerDied","Data":"f7a2e86afdf091be0685e2e2e3b0bfbc8da0b799e5aeeaec28b233cc46ffca4c"} Dec 02 20:39:07 crc kubenswrapper[4796]: I1202 20:39:07.525685 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-4f01-account-create-update-zwh9c" Dec 02 20:39:07 crc kubenswrapper[4796]: I1202 20:39:07.525686 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7a2e86afdf091be0685e2e2e3b0bfbc8da0b799e5aeeaec28b233cc46ffca4c" Dec 02 20:39:08 crc kubenswrapper[4796]: I1202 20:39:08.282823 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:08 crc kubenswrapper[4796]: I1202 20:39:08.541023 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"eb267cf2-7c07-4014-b06c-d34b5517f6ef","Type":"ContainerStarted","Data":"c79c2437589085545392c859ea7ed1cb123d3a14a3fc0328d321a1772532ebda"} Dec 02 20:39:08 crc kubenswrapper[4796]: I1202 20:39:08.541132 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"eb267cf2-7c07-4014-b06c-d34b5517f6ef","Type":"ContainerStarted","Data":"9e1578ae6ebfd9c9ca24e454feee5ed591834d017686b2616813fdf39aa57487"} Dec 02 20:39:08 crc kubenswrapper[4796]: I1202 20:39:08.985922 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-db-sync-pdjpd"] Dec 02 20:39:08 crc kubenswrapper[4796]: E1202 20:39:08.986445 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="456f8361-9b09-40dd-943c-90191091aef2" containerName="mariadb-database-create" Dec 02 20:39:08 crc kubenswrapper[4796]: I1202 20:39:08.986459 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="456f8361-9b09-40dd-943c-90191091aef2" containerName="mariadb-database-create" Dec 02 20:39:08 crc kubenswrapper[4796]: E1202 20:39:08.986480 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c79c74cf-8574-42e6-954b-42455083b729" containerName="mariadb-account-create-update" Dec 02 20:39:08 crc kubenswrapper[4796]: I1202 20:39:08.986487 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c79c74cf-8574-42e6-954b-42455083b729" containerName="mariadb-account-create-update" Dec 02 20:39:08 crc kubenswrapper[4796]: I1202 20:39:08.986652 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="c79c74cf-8574-42e6-954b-42455083b729" containerName="mariadb-account-create-update" Dec 02 20:39:08 crc kubenswrapper[4796]: I1202 20:39:08.986669 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="456f8361-9b09-40dd-943c-90191091aef2" containerName="mariadb-database-create" Dec 02 20:39:08 crc kubenswrapper[4796]: I1202 20:39:08.987174 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-db-sync-pdjpd" Dec 02 20:39:08 crc kubenswrapper[4796]: I1202 20:39:08.989871 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-config-data" Dec 02 20:39:08 crc kubenswrapper[4796]: I1202 20:39:08.990050 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-scripts" Dec 02 20:39:08 crc kubenswrapper[4796]: I1202 20:39:08.990065 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-cinder-dockercfg-8f8t2" Dec 02 20:39:08 crc kubenswrapper[4796]: I1202 20:39:08.999632 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-db-sync-pdjpd"] Dec 02 20:39:09 crc kubenswrapper[4796]: I1202 20:39:09.089547 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/febb424a-fe1a-487d-9c0b-2a7450991431-scripts\") pod \"cinder-db-sync-pdjpd\" (UID: \"febb424a-fe1a-487d-9c0b-2a7450991431\") " pod="watcher-kuttl-default/cinder-db-sync-pdjpd" Dec 02 20:39:09 crc kubenswrapper[4796]: I1202 20:39:09.089702 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/febb424a-fe1a-487d-9c0b-2a7450991431-db-sync-config-data\") pod \"cinder-db-sync-pdjpd\" (UID: \"febb424a-fe1a-487d-9c0b-2a7450991431\") " pod="watcher-kuttl-default/cinder-db-sync-pdjpd" Dec 02 20:39:09 crc kubenswrapper[4796]: I1202 20:39:09.089736 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/febb424a-fe1a-487d-9c0b-2a7450991431-config-data\") pod \"cinder-db-sync-pdjpd\" (UID: \"febb424a-fe1a-487d-9c0b-2a7450991431\") " pod="watcher-kuttl-default/cinder-db-sync-pdjpd" Dec 02 20:39:09 crc kubenswrapper[4796]: I1202 20:39:09.089771 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnvrx\" (UniqueName: \"kubernetes.io/projected/febb424a-fe1a-487d-9c0b-2a7450991431-kube-api-access-xnvrx\") pod \"cinder-db-sync-pdjpd\" (UID: \"febb424a-fe1a-487d-9c0b-2a7450991431\") " pod="watcher-kuttl-default/cinder-db-sync-pdjpd" Dec 02 20:39:09 crc kubenswrapper[4796]: I1202 20:39:09.089851 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/febb424a-fe1a-487d-9c0b-2a7450991431-combined-ca-bundle\") pod \"cinder-db-sync-pdjpd\" (UID: \"febb424a-fe1a-487d-9c0b-2a7450991431\") " pod="watcher-kuttl-default/cinder-db-sync-pdjpd" Dec 02 20:39:09 crc kubenswrapper[4796]: I1202 20:39:09.089953 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/febb424a-fe1a-487d-9c0b-2a7450991431-etc-machine-id\") pod \"cinder-db-sync-pdjpd\" (UID: \"febb424a-fe1a-487d-9c0b-2a7450991431\") " pod="watcher-kuttl-default/cinder-db-sync-pdjpd" Dec 02 20:39:09 crc kubenswrapper[4796]: I1202 20:39:09.191212 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/febb424a-fe1a-487d-9c0b-2a7450991431-scripts\") pod \"cinder-db-sync-pdjpd\" (UID: \"febb424a-fe1a-487d-9c0b-2a7450991431\") " 
pod="watcher-kuttl-default/cinder-db-sync-pdjpd" Dec 02 20:39:09 crc kubenswrapper[4796]: I1202 20:39:09.191508 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/febb424a-fe1a-487d-9c0b-2a7450991431-db-sync-config-data\") pod \"cinder-db-sync-pdjpd\" (UID: \"febb424a-fe1a-487d-9c0b-2a7450991431\") " pod="watcher-kuttl-default/cinder-db-sync-pdjpd" Dec 02 20:39:09 crc kubenswrapper[4796]: I1202 20:39:09.191588 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/febb424a-fe1a-487d-9c0b-2a7450991431-config-data\") pod \"cinder-db-sync-pdjpd\" (UID: \"febb424a-fe1a-487d-9c0b-2a7450991431\") " pod="watcher-kuttl-default/cinder-db-sync-pdjpd" Dec 02 20:39:09 crc kubenswrapper[4796]: I1202 20:39:09.191667 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnvrx\" (UniqueName: \"kubernetes.io/projected/febb424a-fe1a-487d-9c0b-2a7450991431-kube-api-access-xnvrx\") pod \"cinder-db-sync-pdjpd\" (UID: \"febb424a-fe1a-487d-9c0b-2a7450991431\") " pod="watcher-kuttl-default/cinder-db-sync-pdjpd" Dec 02 20:39:09 crc kubenswrapper[4796]: I1202 20:39:09.191786 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/febb424a-fe1a-487d-9c0b-2a7450991431-combined-ca-bundle\") pod \"cinder-db-sync-pdjpd\" (UID: \"febb424a-fe1a-487d-9c0b-2a7450991431\") " pod="watcher-kuttl-default/cinder-db-sync-pdjpd" Dec 02 20:39:09 crc kubenswrapper[4796]: I1202 20:39:09.191890 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/febb424a-fe1a-487d-9c0b-2a7450991431-etc-machine-id\") pod \"cinder-db-sync-pdjpd\" (UID: \"febb424a-fe1a-487d-9c0b-2a7450991431\") " pod="watcher-kuttl-default/cinder-db-sync-pdjpd" Dec 02 20:39:09 crc kubenswrapper[4796]: I1202 20:39:09.191992 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/febb424a-fe1a-487d-9c0b-2a7450991431-etc-machine-id\") pod \"cinder-db-sync-pdjpd\" (UID: \"febb424a-fe1a-487d-9c0b-2a7450991431\") " pod="watcher-kuttl-default/cinder-db-sync-pdjpd" Dec 02 20:39:09 crc kubenswrapper[4796]: I1202 20:39:09.197049 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/febb424a-fe1a-487d-9c0b-2a7450991431-db-sync-config-data\") pod \"cinder-db-sync-pdjpd\" (UID: \"febb424a-fe1a-487d-9c0b-2a7450991431\") " pod="watcher-kuttl-default/cinder-db-sync-pdjpd" Dec 02 20:39:09 crc kubenswrapper[4796]: I1202 20:39:09.200165 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/febb424a-fe1a-487d-9c0b-2a7450991431-config-data\") pod \"cinder-db-sync-pdjpd\" (UID: \"febb424a-fe1a-487d-9c0b-2a7450991431\") " pod="watcher-kuttl-default/cinder-db-sync-pdjpd" Dec 02 20:39:09 crc kubenswrapper[4796]: I1202 20:39:09.202747 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/febb424a-fe1a-487d-9c0b-2a7450991431-scripts\") pod \"cinder-db-sync-pdjpd\" (UID: \"febb424a-fe1a-487d-9c0b-2a7450991431\") " pod="watcher-kuttl-default/cinder-db-sync-pdjpd" Dec 02 20:39:09 crc kubenswrapper[4796]: I1202 20:39:09.204773 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/febb424a-fe1a-487d-9c0b-2a7450991431-combined-ca-bundle\") pod \"cinder-db-sync-pdjpd\" (UID: \"febb424a-fe1a-487d-9c0b-2a7450991431\") " pod="watcher-kuttl-default/cinder-db-sync-pdjpd" Dec 02 20:39:09 crc kubenswrapper[4796]: I1202 20:39:09.208957 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnvrx\" (UniqueName: \"kubernetes.io/projected/febb424a-fe1a-487d-9c0b-2a7450991431-kube-api-access-xnvrx\") pod \"cinder-db-sync-pdjpd\" (UID: \"febb424a-fe1a-487d-9c0b-2a7450991431\") " pod="watcher-kuttl-default/cinder-db-sync-pdjpd" Dec 02 20:39:09 crc kubenswrapper[4796]: I1202 20:39:09.302773 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-db-sync-pdjpd" Dec 02 20:39:09 crc kubenswrapper[4796]: I1202 20:39:09.550505 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:09 crc kubenswrapper[4796]: I1202 20:39:09.560805 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"eb267cf2-7c07-4014-b06c-d34b5517f6ef","Type":"ContainerStarted","Data":"b6c6d0e03d805566a0c0eccc3cd3697d4657d5892d1ce0183136eb45ca5f97bc"} Dec 02 20:39:09 crc kubenswrapper[4796]: I1202 20:39:09.803672 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-db-sync-pdjpd"] Dec 02 20:39:09 crc kubenswrapper[4796]: W1202 20:39:09.807107 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfebb424a_fe1a_487d_9c0b_2a7450991431.slice/crio-69265a1afd52dae1bacc87689e90e50b13f9a4b9ac06e51d190ab54a290fd3cd WatchSource:0}: Error finding container 69265a1afd52dae1bacc87689e90e50b13f9a4b9ac06e51d190ab54a290fd3cd: Status 404 returned error can't find the container with id 69265a1afd52dae1bacc87689e90e50b13f9a4b9ac06e51d190ab54a290fd3cd Dec 02 20:39:10 crc kubenswrapper[4796]: I1202 20:39:10.569836 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-sync-pdjpd" event={"ID":"febb424a-fe1a-487d-9c0b-2a7450991431","Type":"ContainerStarted","Data":"69265a1afd52dae1bacc87689e90e50b13f9a4b9ac06e51d190ab54a290fd3cd"} Dec 02 20:39:10 crc kubenswrapper[4796]: I1202 20:39:10.572800 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"eb267cf2-7c07-4014-b06c-d34b5517f6ef","Type":"ContainerStarted","Data":"d7edf9597ac38b23c14d700eaffd57b76e17f7c411925bf6126fdf1bf558f509"} Dec 02 20:39:10 crc kubenswrapper[4796]: I1202 20:39:10.572974 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:10 crc kubenswrapper[4796]: I1202 20:39:10.597269 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.158738506 podStartE2EDuration="5.597231831s" podCreationTimestamp="2025-12-02 20:39:05 +0000 UTC" firstStartedPulling="2025-12-02 20:39:06.72340235 +0000 UTC m=+1629.726777894" lastFinishedPulling="2025-12-02 20:39:10.161895675 +0000 UTC m=+1633.165271219" observedRunningTime="2025-12-02 20:39:10.593083972 +0000 UTC m=+1633.596459506" watchObservedRunningTime="2025-12-02 
20:39:10.597231831 +0000 UTC m=+1633.600607365" Dec 02 20:39:10 crc kubenswrapper[4796]: I1202 20:39:10.729228 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:11 crc kubenswrapper[4796]: I1202 20:39:11.939397 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:12 crc kubenswrapper[4796]: I1202 20:39:12.265576 4796 scope.go:117] "RemoveContainer" containerID="a3b1e4c1e71e5db50d52130283a4f488c4d2f8d9bfa463ec7357e0975e3daac1" Dec 02 20:39:12 crc kubenswrapper[4796]: E1202 20:39:12.266036 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wzhpq_openshift-machine-config-operator(5558dc7c-93f9-4212-bf22-fdec743e47ee)\"" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" Dec 02 20:39:13 crc kubenswrapper[4796]: I1202 20:39:13.150157 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:14 crc kubenswrapper[4796]: I1202 20:39:14.357940 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:15 crc kubenswrapper[4796]: I1202 20:39:15.553630 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:16 crc kubenswrapper[4796]: I1202 20:39:16.770237 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:18 crc kubenswrapper[4796]: I1202 20:39:18.046624 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:18 crc kubenswrapper[4796]: I1202 20:39:18.752971 4796 scope.go:117] "RemoveContainer" containerID="a15d4448cfb7cb1d1327136fb2ac5ebe9f47ab6220d38091735a68069057e09e" Dec 02 20:39:19 crc kubenswrapper[4796]: I1202 20:39:19.271716 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:20 crc kubenswrapper[4796]: I1202 20:39:20.479736 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:21 crc kubenswrapper[4796]: I1202 20:39:21.734305 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:23 crc kubenswrapper[4796]: I1202 20:39:23.000932 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:24 crc kubenswrapper[4796]: I1202 20:39:24.264523 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:25 crc kubenswrapper[4796]: I1202 20:39:25.513985 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:26 crc kubenswrapper[4796]: I1202 20:39:26.747195 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:26 crc kubenswrapper[4796]: I1202 20:39:26.873051 4796 scope.go:117] "RemoveContainer" containerID="fc851b311078a7af7e708c1ac894e849e46db3980e13fdce30e068515f964f79" Dec 02 20:39:26 crc kubenswrapper[4796]: E1202 20:39:26.906443 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Dec 02 20:39:26 crc kubenswrapper[4796]: E1202 20:39:26.906618 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xnvrx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGr
oup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-pdjpd_watcher-kuttl-default(febb424a-fe1a-487d-9c0b-2a7450991431): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 20:39:26 crc kubenswrapper[4796]: E1202 20:39:26.908273 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="watcher-kuttl-default/cinder-db-sync-pdjpd" podUID="febb424a-fe1a-487d-9c0b-2a7450991431" Dec 02 20:39:27 crc kubenswrapper[4796]: I1202 20:39:27.270198 4796 scope.go:117] "RemoveContainer" containerID="a3b1e4c1e71e5db50d52130283a4f488c4d2f8d9bfa463ec7357e0975e3daac1" Dec 02 20:39:27 crc kubenswrapper[4796]: E1202 20:39:27.270471 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wzhpq_openshift-machine-config-operator(5558dc7c-93f9-4212-bf22-fdec743e47ee)\"" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" Dec 02 20:39:27 crc kubenswrapper[4796]: E1202 20:39:27.778764 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="watcher-kuttl-default/cinder-db-sync-pdjpd" podUID="febb424a-fe1a-487d-9c0b-2a7450991431" Dec 02 20:39:27 crc kubenswrapper[4796]: I1202 20:39:27.974766 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:29 crc kubenswrapper[4796]: I1202 20:39:29.244383 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:30 crc kubenswrapper[4796]: I1202 20:39:30.512930 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:31 crc kubenswrapper[4796]: I1202 20:39:31.754184 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:32 crc kubenswrapper[4796]: I1202 20:39:32.992777 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:34 crc kubenswrapper[4796]: I1202 20:39:34.253059 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:35 crc kubenswrapper[4796]: I1202 20:39:35.441556 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:36 crc kubenswrapper[4796]: I1202 20:39:36.207167 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:36 crc kubenswrapper[4796]: I1202 20:39:36.736293 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:37 crc kubenswrapper[4796]: I1202 20:39:37.960773 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:39 crc kubenswrapper[4796]: I1202 20:39:39.216169 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:40 crc kubenswrapper[4796]: I1202 20:39:40.491020 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:41 crc kubenswrapper[4796]: I1202 20:39:41.265454 4796 scope.go:117] "RemoveContainer" containerID="a3b1e4c1e71e5db50d52130283a4f488c4d2f8d9bfa463ec7357e0975e3daac1" Dec 02 20:39:41 crc kubenswrapper[4796]: E1202 20:39:41.266453 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wzhpq_openshift-machine-config-operator(5558dc7c-93f9-4212-bf22-fdec743e47ee)\"" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" Dec 02 20:39:41 crc kubenswrapper[4796]: I1202 20:39:41.746208 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:41 crc kubenswrapper[4796]: I1202 20:39:41.937735 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-sync-pdjpd" event={"ID":"febb424a-fe1a-487d-9c0b-2a7450991431","Type":"ContainerStarted","Data":"892e4a8ab47a0b3db0f74d800b4a8611cc56b7618b6f40d6a78f208bb375e871"} Dec 02 20:39:41 crc kubenswrapper[4796]: I1202 20:39:41.968863 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-db-sync-pdjpd" podStartSLOduration=3.319900235 podStartE2EDuration="33.968837019s" podCreationTimestamp="2025-12-02 20:39:08 +0000 UTC" firstStartedPulling="2025-12-02 20:39:09.808852568 +0000 UTC m=+1632.812228102" lastFinishedPulling="2025-12-02 20:39:40.457789352 +0000 UTC m=+1663.461164886" observedRunningTime="2025-12-02 20:39:41.957785053 +0000 UTC m=+1664.961160637" watchObservedRunningTime="2025-12-02 20:39:41.968837019 +0000 UTC m=+1664.972212563" Dec 02 20:39:43 crc kubenswrapper[4796]: I1202 20:39:43.041152 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:44 crc kubenswrapper[4796]: I1202 20:39:44.287771 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:45 crc kubenswrapper[4796]: I1202 20:39:45.509965 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:45 crc kubenswrapper[4796]: I1202 20:39:45.981580 4796 generic.go:334] "Generic (PLEG): container finished" podID="febb424a-fe1a-487d-9c0b-2a7450991431" containerID="892e4a8ab47a0b3db0f74d800b4a8611cc56b7618b6f40d6a78f208bb375e871" exitCode=0 Dec 02 20:39:45 crc kubenswrapper[4796]: I1202 20:39:45.981635 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-sync-pdjpd" event={"ID":"febb424a-fe1a-487d-9c0b-2a7450991431","Type":"ContainerDied","Data":"892e4a8ab47a0b3db0f74d800b4a8611cc56b7618b6f40d6a78f208bb375e871"} Dec 02 20:39:46 crc kubenswrapper[4796]: I1202 20:39:46.714065 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:47 crc kubenswrapper[4796]: I1202 20:39:47.356562 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-db-sync-pdjpd" Dec 02 20:39:47 crc kubenswrapper[4796]: I1202 20:39:47.472214 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnvrx\" (UniqueName: \"kubernetes.io/projected/febb424a-fe1a-487d-9c0b-2a7450991431-kube-api-access-xnvrx\") pod \"febb424a-fe1a-487d-9c0b-2a7450991431\" (UID: \"febb424a-fe1a-487d-9c0b-2a7450991431\") " Dec 02 20:39:47 crc kubenswrapper[4796]: I1202 20:39:47.472624 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/febb424a-fe1a-487d-9c0b-2a7450991431-etc-machine-id\") pod \"febb424a-fe1a-487d-9c0b-2a7450991431\" (UID: \"febb424a-fe1a-487d-9c0b-2a7450991431\") " Dec 02 20:39:47 crc kubenswrapper[4796]: I1202 20:39:47.472669 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/febb424a-fe1a-487d-9c0b-2a7450991431-config-data\") pod \"febb424a-fe1a-487d-9c0b-2a7450991431\" (UID: \"febb424a-fe1a-487d-9c0b-2a7450991431\") " Dec 02 20:39:47 crc kubenswrapper[4796]: I1202 20:39:47.472712 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/febb424a-fe1a-487d-9c0b-2a7450991431-scripts\") pod \"febb424a-fe1a-487d-9c0b-2a7450991431\" (UID: \"febb424a-fe1a-487d-9c0b-2a7450991431\") " Dec 02 20:39:47 crc kubenswrapper[4796]: I1202 20:39:47.472747 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/febb424a-fe1a-487d-9c0b-2a7450991431-combined-ca-bundle\") pod \"febb424a-fe1a-487d-9c0b-2a7450991431\" (UID: \"febb424a-fe1a-487d-9c0b-2a7450991431\") " Dec 02 20:39:47 crc kubenswrapper[4796]: I1202 20:39:47.472820 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/febb424a-fe1a-487d-9c0b-2a7450991431-db-sync-config-data\") pod \"febb424a-fe1a-487d-9c0b-2a7450991431\" (UID: \"febb424a-fe1a-487d-9c0b-2a7450991431\") " Dec 02 20:39:47 crc 
kubenswrapper[4796]: I1202 20:39:47.472878 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/febb424a-fe1a-487d-9c0b-2a7450991431-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "febb424a-fe1a-487d-9c0b-2a7450991431" (UID: "febb424a-fe1a-487d-9c0b-2a7450991431"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:39:47 crc kubenswrapper[4796]: I1202 20:39:47.473264 4796 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/febb424a-fe1a-487d-9c0b-2a7450991431-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 02 20:39:47 crc kubenswrapper[4796]: I1202 20:39:47.478864 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/febb424a-fe1a-487d-9c0b-2a7450991431-kube-api-access-xnvrx" (OuterVolumeSpecName: "kube-api-access-xnvrx") pod "febb424a-fe1a-487d-9c0b-2a7450991431" (UID: "febb424a-fe1a-487d-9c0b-2a7450991431"). InnerVolumeSpecName "kube-api-access-xnvrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:39:47 crc kubenswrapper[4796]: I1202 20:39:47.481175 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/febb424a-fe1a-487d-9c0b-2a7450991431-scripts" (OuterVolumeSpecName: "scripts") pod "febb424a-fe1a-487d-9c0b-2a7450991431" (UID: "febb424a-fe1a-487d-9c0b-2a7450991431"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:39:47 crc kubenswrapper[4796]: I1202 20:39:47.486415 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/febb424a-fe1a-487d-9c0b-2a7450991431-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "febb424a-fe1a-487d-9c0b-2a7450991431" (UID: "febb424a-fe1a-487d-9c0b-2a7450991431"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:39:47 crc kubenswrapper[4796]: I1202 20:39:47.508806 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/febb424a-fe1a-487d-9c0b-2a7450991431-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "febb424a-fe1a-487d-9c0b-2a7450991431" (UID: "febb424a-fe1a-487d-9c0b-2a7450991431"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:39:47 crc kubenswrapper[4796]: I1202 20:39:47.522387 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/febb424a-fe1a-487d-9c0b-2a7450991431-config-data" (OuterVolumeSpecName: "config-data") pod "febb424a-fe1a-487d-9c0b-2a7450991431" (UID: "febb424a-fe1a-487d-9c0b-2a7450991431"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:39:47 crc kubenswrapper[4796]: I1202 20:39:47.575191 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/febb424a-fe1a-487d-9c0b-2a7450991431-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:39:47 crc kubenswrapper[4796]: I1202 20:39:47.575223 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/febb424a-fe1a-487d-9c0b-2a7450991431-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:39:47 crc kubenswrapper[4796]: I1202 20:39:47.575235 4796 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/febb424a-fe1a-487d-9c0b-2a7450991431-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:39:47 crc kubenswrapper[4796]: I1202 20:39:47.575244 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnvrx\" (UniqueName: \"kubernetes.io/projected/febb424a-fe1a-487d-9c0b-2a7450991431-kube-api-access-xnvrx\") on node \"crc\" DevicePath \"\"" Dec 02 20:39:47 crc kubenswrapper[4796]: I1202 20:39:47.575270 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/febb424a-fe1a-487d-9c0b-2a7450991431-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:39:47 crc kubenswrapper[4796]: I1202 20:39:47.945621 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.005221 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-sync-pdjpd" event={"ID":"febb424a-fe1a-487d-9c0b-2a7450991431","Type":"ContainerDied","Data":"69265a1afd52dae1bacc87689e90e50b13f9a4b9ac06e51d190ab54a290fd3cd"} Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.005309 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69265a1afd52dae1bacc87689e90e50b13f9a4b9ac06e51d190ab54a290fd3cd" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.005309 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-db-sync-pdjpd" Dec 02 20:39:48 crc kubenswrapper[4796]: E1202 20:39:48.295009 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfebb424a_fe1a_487d_9c0b_2a7450991431.slice\": RecentStats: unable to find data in memory cache]" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.361418 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Dec 02 20:39:48 crc kubenswrapper[4796]: E1202 20:39:48.362036 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="febb424a-fe1a-487d-9c0b-2a7450991431" containerName="cinder-db-sync" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.362058 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="febb424a-fe1a-487d-9c0b-2a7450991431" containerName="cinder-db-sync" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.362307 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="febb424a-fe1a-487d-9c0b-2a7450991431" containerName="cinder-db-sync" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.363645 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.366883 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-cinder-dockercfg-8f8t2" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.367456 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-scripts" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.367898 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-config-data" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.367790 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-scheduler-config-data" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.376173 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.413483 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.418064 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.422206 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-backup-config-data" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.436754 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.497898 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.497951 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwcj8\" (UniqueName: \"kubernetes.io/projected/c58807c0-69fa-43dd-9ce9-e695e624ac61-kube-api-access-rwcj8\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.497987 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.498017 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-etc-nvme\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.498038 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blz56\" (UniqueName: \"kubernetes.io/projected/73f2d7db-1725-4ebd-9cda-d3eba7d26842-kube-api-access-blz56\") pod \"cinder-scheduler-0\" (UID: \"73f2d7db-1725-4ebd-9cda-d3eba7d26842\") " pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.498061 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-sys\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.498085 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.498112 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-lib-modules\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" 
Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.498133 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73f2d7db-1725-4ebd-9cda-d3eba7d26842-scripts\") pod \"cinder-scheduler-0\" (UID: \"73f2d7db-1725-4ebd-9cda-d3eba7d26842\") " pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.498155 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c58807c0-69fa-43dd-9ce9-e695e624ac61-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.498191 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73f2d7db-1725-4ebd-9cda-d3eba7d26842-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"73f2d7db-1725-4ebd-9cda-d3eba7d26842\") " pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.498230 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.498320 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-run\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.498352 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.498375 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73f2d7db-1725-4ebd-9cda-d3eba7d26842-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"73f2d7db-1725-4ebd-9cda-d3eba7d26842\") " pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.498415 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73f2d7db-1725-4ebd-9cda-d3eba7d26842-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"73f2d7db-1725-4ebd-9cda-d3eba7d26842\") " pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.498442 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c58807c0-69fa-43dd-9ce9-e695e624ac61-config-data\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 
20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.498461 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c58807c0-69fa-43dd-9ce9-e695e624ac61-cert-memcached-mtls\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.498482 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73f2d7db-1725-4ebd-9cda-d3eba7d26842-config-data\") pod \"cinder-scheduler-0\" (UID: \"73f2d7db-1725-4ebd-9cda-d3eba7d26842\") " pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.498505 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/73f2d7db-1725-4ebd-9cda-d3eba7d26842-cert-memcached-mtls\") pod \"cinder-scheduler-0\" (UID: \"73f2d7db-1725-4ebd-9cda-d3eba7d26842\") " pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.498530 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c58807c0-69fa-43dd-9ce9-e695e624ac61-config-data-custom\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.498579 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c58807c0-69fa-43dd-9ce9-e695e624ac61-scripts\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.498601 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-dev\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.600036 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73f2d7db-1725-4ebd-9cda-d3eba7d26842-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"73f2d7db-1725-4ebd-9cda-d3eba7d26842\") " pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.600086 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.600111 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-run\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.600143 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.600161 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73f2d7db-1725-4ebd-9cda-d3eba7d26842-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"73f2d7db-1725-4ebd-9cda-d3eba7d26842\") " pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.600192 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73f2d7db-1725-4ebd-9cda-d3eba7d26842-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"73f2d7db-1725-4ebd-9cda-d3eba7d26842\") " pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.600213 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c58807c0-69fa-43dd-9ce9-e695e624ac61-cert-memcached-mtls\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.600207 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-run\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.600230 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c58807c0-69fa-43dd-9ce9-e695e624ac61-config-data\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.600319 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73f2d7db-1725-4ebd-9cda-d3eba7d26842-config-data\") pod \"cinder-scheduler-0\" (UID: \"73f2d7db-1725-4ebd-9cda-d3eba7d26842\") " pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.600340 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/73f2d7db-1725-4ebd-9cda-d3eba7d26842-cert-memcached-mtls\") pod \"cinder-scheduler-0\" (UID: \"73f2d7db-1725-4ebd-9cda-d3eba7d26842\") " pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.600371 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c58807c0-69fa-43dd-9ce9-e695e624ac61-config-data-custom\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.600482 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c58807c0-69fa-43dd-9ce9-e695e624ac61-scripts\") pod \"cinder-backup-0\" 
(UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.600506 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-dev\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.600575 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.600592 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwcj8\" (UniqueName: \"kubernetes.io/projected/c58807c0-69fa-43dd-9ce9-e695e624ac61-kube-api-access-rwcj8\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.600625 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.600652 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-etc-nvme\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.600681 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blz56\" (UniqueName: \"kubernetes.io/projected/73f2d7db-1725-4ebd-9cda-d3eba7d26842-kube-api-access-blz56\") pod \"cinder-scheduler-0\" (UID: \"73f2d7db-1725-4ebd-9cda-d3eba7d26842\") " pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.600709 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-sys\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.600721 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-dev\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.600748 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.600790 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.600868 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-lib-modules\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.600913 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73f2d7db-1725-4ebd-9cda-d3eba7d26842-scripts\") pod \"cinder-scheduler-0\" (UID: \"73f2d7db-1725-4ebd-9cda-d3eba7d26842\") " pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.600960 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c58807c0-69fa-43dd-9ce9-e695e624ac61-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.601306 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.601383 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-etc-nvme\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.600671 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.601596 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-sys\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.601701 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.602133 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-lib-modules\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: 
I1202 20:39:48.602195 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.600153 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73f2d7db-1725-4ebd-9cda-d3eba7d26842-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"73f2d7db-1725-4ebd-9cda-d3eba7d26842\") " pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.612964 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73f2d7db-1725-4ebd-9cda-d3eba7d26842-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"73f2d7db-1725-4ebd-9cda-d3eba7d26842\") " pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.612999 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c58807c0-69fa-43dd-9ce9-e695e624ac61-config-data\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.614837 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c58807c0-69fa-43dd-9ce9-e695e624ac61-scripts\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.614927 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73f2d7db-1725-4ebd-9cda-d3eba7d26842-config-data\") pod \"cinder-scheduler-0\" (UID: \"73f2d7db-1725-4ebd-9cda-d3eba7d26842\") " pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.614996 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c58807c0-69fa-43dd-9ce9-e695e624ac61-cert-memcached-mtls\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.615565 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c58807c0-69fa-43dd-9ce9-e695e624ac61-config-data-custom\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.615734 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73f2d7db-1725-4ebd-9cda-d3eba7d26842-scripts\") pod \"cinder-scheduler-0\" (UID: \"73f2d7db-1725-4ebd-9cda-d3eba7d26842\") " pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.616968 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c58807c0-69fa-43dd-9ce9-e695e624ac61-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: 
\"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.622435 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73f2d7db-1725-4ebd-9cda-d3eba7d26842-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"73f2d7db-1725-4ebd-9cda-d3eba7d26842\") " pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.624368 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/73f2d7db-1725-4ebd-9cda-d3eba7d26842-cert-memcached-mtls\") pod \"cinder-scheduler-0\" (UID: \"73f2d7db-1725-4ebd-9cda-d3eba7d26842\") " pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.632313 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwcj8\" (UniqueName: \"kubernetes.io/projected/c58807c0-69fa-43dd-9ce9-e695e624ac61-kube-api-access-rwcj8\") pod \"cinder-backup-0\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.644683 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.646608 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.655890 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.661941 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-api-config-data" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.672839 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blz56\" (UniqueName: \"kubernetes.io/projected/73f2d7db-1725-4ebd-9cda-d3eba7d26842-kube-api-access-blz56\") pod \"cinder-scheduler-0\" (UID: \"73f2d7db-1725-4ebd-9cda-d3eba7d26842\") " pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.696471 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.739517 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.805103 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znq6m\" (UniqueName: \"kubernetes.io/projected/b1277926-e4bf-43ff-94f3-69dae1d8f914-kube-api-access-znq6m\") pod \"cinder-api-0\" (UID: \"b1277926-e4bf-43ff-94f3-69dae1d8f914\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.805210 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1277926-e4bf-43ff-94f3-69dae1d8f914-scripts\") pod \"cinder-api-0\" (UID: \"b1277926-e4bf-43ff-94f3-69dae1d8f914\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.805240 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1277926-e4bf-43ff-94f3-69dae1d8f914-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b1277926-e4bf-43ff-94f3-69dae1d8f914\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.805273 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1277926-e4bf-43ff-94f3-69dae1d8f914-config-data\") pod \"cinder-api-0\" (UID: \"b1277926-e4bf-43ff-94f3-69dae1d8f914\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.805300 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b1277926-e4bf-43ff-94f3-69dae1d8f914-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b1277926-e4bf-43ff-94f3-69dae1d8f914\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.805329 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1277926-e4bf-43ff-94f3-69dae1d8f914-config-data-custom\") pod \"cinder-api-0\" (UID: \"b1277926-e4bf-43ff-94f3-69dae1d8f914\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.805361 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1277926-e4bf-43ff-94f3-69dae1d8f914-logs\") pod \"cinder-api-0\" (UID: \"b1277926-e4bf-43ff-94f3-69dae1d8f914\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.805384 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b1277926-e4bf-43ff-94f3-69dae1d8f914-cert-memcached-mtls\") pod \"cinder-api-0\" (UID: \"b1277926-e4bf-43ff-94f3-69dae1d8f914\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.910303 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b1277926-e4bf-43ff-94f3-69dae1d8f914-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b1277926-e4bf-43ff-94f3-69dae1d8f914\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:48 crc 
kubenswrapper[4796]: I1202 20:39:48.910372 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1277926-e4bf-43ff-94f3-69dae1d8f914-config-data-custom\") pod \"cinder-api-0\" (UID: \"b1277926-e4bf-43ff-94f3-69dae1d8f914\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.910417 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1277926-e4bf-43ff-94f3-69dae1d8f914-logs\") pod \"cinder-api-0\" (UID: \"b1277926-e4bf-43ff-94f3-69dae1d8f914\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.910446 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b1277926-e4bf-43ff-94f3-69dae1d8f914-cert-memcached-mtls\") pod \"cinder-api-0\" (UID: \"b1277926-e4bf-43ff-94f3-69dae1d8f914\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.910500 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znq6m\" (UniqueName: \"kubernetes.io/projected/b1277926-e4bf-43ff-94f3-69dae1d8f914-kube-api-access-znq6m\") pod \"cinder-api-0\" (UID: \"b1277926-e4bf-43ff-94f3-69dae1d8f914\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.910554 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1277926-e4bf-43ff-94f3-69dae1d8f914-scripts\") pod \"cinder-api-0\" (UID: \"b1277926-e4bf-43ff-94f3-69dae1d8f914\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.910583 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1277926-e4bf-43ff-94f3-69dae1d8f914-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b1277926-e4bf-43ff-94f3-69dae1d8f914\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.910610 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1277926-e4bf-43ff-94f3-69dae1d8f914-config-data\") pod \"cinder-api-0\" (UID: \"b1277926-e4bf-43ff-94f3-69dae1d8f914\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.916213 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1277926-e4bf-43ff-94f3-69dae1d8f914-logs\") pod \"cinder-api-0\" (UID: \"b1277926-e4bf-43ff-94f3-69dae1d8f914\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.916535 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b1277926-e4bf-43ff-94f3-69dae1d8f914-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b1277926-e4bf-43ff-94f3-69dae1d8f914\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.923790 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1277926-e4bf-43ff-94f3-69dae1d8f914-config-data-custom\") pod \"cinder-api-0\" (UID: 
\"b1277926-e4bf-43ff-94f3-69dae1d8f914\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.926003 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1277926-e4bf-43ff-94f3-69dae1d8f914-scripts\") pod \"cinder-api-0\" (UID: \"b1277926-e4bf-43ff-94f3-69dae1d8f914\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.926040 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1277926-e4bf-43ff-94f3-69dae1d8f914-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b1277926-e4bf-43ff-94f3-69dae1d8f914\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.926708 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b1277926-e4bf-43ff-94f3-69dae1d8f914-cert-memcached-mtls\") pod \"cinder-api-0\" (UID: \"b1277926-e4bf-43ff-94f3-69dae1d8f914\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.930045 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1277926-e4bf-43ff-94f3-69dae1d8f914-config-data\") pod \"cinder-api-0\" (UID: \"b1277926-e4bf-43ff-94f3-69dae1d8f914\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:48 crc kubenswrapper[4796]: I1202 20:39:48.945682 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znq6m\" (UniqueName: \"kubernetes.io/projected/b1277926-e4bf-43ff-94f3-69dae1d8f914-kube-api-access-znq6m\") pod \"cinder-api-0\" (UID: \"b1277926-e4bf-43ff-94f3-69dae1d8f914\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:49 crc kubenswrapper[4796]: I1202 20:39:49.118835 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:49 crc kubenswrapper[4796]: I1202 20:39:49.170345 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:49 crc kubenswrapper[4796]: I1202 20:39:49.376474 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Dec 02 20:39:49 crc kubenswrapper[4796]: I1202 20:39:49.454794 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Dec 02 20:39:49 crc kubenswrapper[4796]: W1202 20:39:49.727410 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1277926_e4bf_43ff_94f3_69dae1d8f914.slice/crio-e37727dee22fb778c813d99646af1ff2bddbe2b0e6f5a6337f140d46dbe4eeae WatchSource:0}: Error finding container e37727dee22fb778c813d99646af1ff2bddbe2b0e6f5a6337f140d46dbe4eeae: Status 404 returned error can't find the container with id e37727dee22fb778c813d99646af1ff2bddbe2b0e6f5a6337f140d46dbe4eeae Dec 02 20:39:49 crc kubenswrapper[4796]: I1202 20:39:49.741390 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Dec 02 20:39:50 crc kubenswrapper[4796]: I1202 20:39:50.056519 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"b1277926-e4bf-43ff-94f3-69dae1d8f914","Type":"ContainerStarted","Data":"e37727dee22fb778c813d99646af1ff2bddbe2b0e6f5a6337f140d46dbe4eeae"} Dec 02 20:39:50 crc kubenswrapper[4796]: I1202 20:39:50.058833 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"c58807c0-69fa-43dd-9ce9-e695e624ac61","Type":"ContainerStarted","Data":"af9a8d31eed3dd9e1db0b1cd34fafe2d3e37ab10bde977feb6a8a1ea581b310f"} Dec 02 20:39:50 crc kubenswrapper[4796]: I1202 20:39:50.060048 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"73f2d7db-1725-4ebd-9cda-d3eba7d26842","Type":"ContainerStarted","Data":"11d2792a8eb82de82f1307463f25d75c7c71de45ec2717617f1c0ae4ceb246cb"} Dec 02 20:39:50 crc kubenswrapper[4796]: I1202 20:39:50.391947 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:51 crc kubenswrapper[4796]: I1202 20:39:51.086587 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"b1277926-e4bf-43ff-94f3-69dae1d8f914","Type":"ContainerStarted","Data":"5263316e7534a568487f73659ee39445f710daab4323b45e7dd8e9c97dda7de3"} Dec 02 20:39:51 crc kubenswrapper[4796]: I1202 20:39:51.089012 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"c58807c0-69fa-43dd-9ce9-e695e624ac61","Type":"ContainerStarted","Data":"ac0089c2b490b7feae34a1795b8a7fa15ec45f3f2ee8b34d17e4030fc3cab0e8"} Dec 02 20:39:51 crc kubenswrapper[4796]: I1202 20:39:51.644360 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:51 crc kubenswrapper[4796]: I1202 20:39:51.843006 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["watcher-kuttl-default/cinder-api-0"] Dec 02 20:39:52 crc kubenswrapper[4796]: I1202 20:39:52.099463 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"b1277926-e4bf-43ff-94f3-69dae1d8f914","Type":"ContainerStarted","Data":"f1625a9f97d422c571e7776d649cf7db66d8b4be9d16277de520c6d55e5b8f9b"} Dec 02 20:39:52 crc kubenswrapper[4796]: I1202 20:39:52.100344 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:52 crc kubenswrapper[4796]: I1202 20:39:52.102097 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"c58807c0-69fa-43dd-9ce9-e695e624ac61","Type":"ContainerStarted","Data":"52ec871b5abe7425a84adcf51d297c66cae98a3fee6102f4a7771787f26607e9"} Dec 02 20:39:52 crc kubenswrapper[4796]: I1202 20:39:52.105869 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"73f2d7db-1725-4ebd-9cda-d3eba7d26842","Type":"ContainerStarted","Data":"4cc753275983855c1372daf17009a9638bc5a5ba95ee1324a27acb829ebb7df6"} Dec 02 20:39:52 crc kubenswrapper[4796]: I1202 20:39:52.105897 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"73f2d7db-1725-4ebd-9cda-d3eba7d26842","Type":"ContainerStarted","Data":"f219fed746bc252bac3e0b230392580053230da0a83c2167f6eec181dafe7b71"} Dec 02 20:39:52 crc kubenswrapper[4796]: I1202 20:39:52.121728 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-api-0" podStartSLOduration=4.121711196 podStartE2EDuration="4.121711196s" podCreationTimestamp="2025-12-02 20:39:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:39:52.116210193 +0000 UTC m=+1675.119585727" watchObservedRunningTime="2025-12-02 20:39:52.121711196 +0000 UTC m=+1675.125086730" Dec 02 20:39:52 crc kubenswrapper[4796]: I1202 20:39:52.145480 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-scheduler-0" podStartSLOduration=3.162496955 podStartE2EDuration="4.145459667s" podCreationTimestamp="2025-12-02 20:39:48 +0000 UTC" firstStartedPulling="2025-12-02 20:39:49.401694433 +0000 UTC m=+1672.405069967" lastFinishedPulling="2025-12-02 20:39:50.384657145 +0000 UTC m=+1673.388032679" observedRunningTime="2025-12-02 20:39:52.139216497 +0000 UTC m=+1675.142592021" watchObservedRunningTime="2025-12-02 20:39:52.145459667 +0000 UTC m=+1675.148835201" Dec 02 20:39:52 crc kubenswrapper[4796]: I1202 20:39:52.168772 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-backup-0" podStartSLOduration=3.242856847 podStartE2EDuration="4.168755507s" podCreationTimestamp="2025-12-02 20:39:48 +0000 UTC" firstStartedPulling="2025-12-02 20:39:49.459497003 +0000 UTC m=+1672.462872537" lastFinishedPulling="2025-12-02 20:39:50.385395663 +0000 UTC m=+1673.388771197" observedRunningTime="2025-12-02 20:39:52.158920721 +0000 UTC m=+1675.162296255" watchObservedRunningTime="2025-12-02 20:39:52.168755507 +0000 UTC m=+1675.172131041" Dec 02 20:39:52 crc kubenswrapper[4796]: I1202 20:39:52.880174 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 
20:39:53 crc kubenswrapper[4796]: I1202 20:39:53.115005 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-api-0" podUID="b1277926-e4bf-43ff-94f3-69dae1d8f914" containerName="cinder-api-log" containerID="cri-o://5263316e7534a568487f73659ee39445f710daab4323b45e7dd8e9c97dda7de3" gracePeriod=30 Dec 02 20:39:53 crc kubenswrapper[4796]: I1202 20:39:53.115058 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-api-0" podUID="b1277926-e4bf-43ff-94f3-69dae1d8f914" containerName="cinder-api" containerID="cri-o://f1625a9f97d422c571e7776d649cf7db66d8b4be9d16277de520c6d55e5b8f9b" gracePeriod=30 Dec 02 20:39:53 crc kubenswrapper[4796]: I1202 20:39:53.697769 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:39:53 crc kubenswrapper[4796]: I1202 20:39:53.739860 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:53 crc kubenswrapper[4796]: I1202 20:39:53.946473 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.097117 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.128152 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b1277926-e4bf-43ff-94f3-69dae1d8f914-etc-machine-id\") pod \"b1277926-e4bf-43ff-94f3-69dae1d8f914\" (UID: \"b1277926-e4bf-43ff-94f3-69dae1d8f914\") " Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.128249 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znq6m\" (UniqueName: \"kubernetes.io/projected/b1277926-e4bf-43ff-94f3-69dae1d8f914-kube-api-access-znq6m\") pod \"b1277926-e4bf-43ff-94f3-69dae1d8f914\" (UID: \"b1277926-e4bf-43ff-94f3-69dae1d8f914\") " Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.128290 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1277926-e4bf-43ff-94f3-69dae1d8f914-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b1277926-e4bf-43ff-94f3-69dae1d8f914" (UID: "b1277926-e4bf-43ff-94f3-69dae1d8f914"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.128321 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1277926-e4bf-43ff-94f3-69dae1d8f914-scripts\") pod \"b1277926-e4bf-43ff-94f3-69dae1d8f914\" (UID: \"b1277926-e4bf-43ff-94f3-69dae1d8f914\") " Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.128375 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1277926-e4bf-43ff-94f3-69dae1d8f914-config-data-custom\") pod \"b1277926-e4bf-43ff-94f3-69dae1d8f914\" (UID: \"b1277926-e4bf-43ff-94f3-69dae1d8f914\") " Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.128506 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1277926-e4bf-43ff-94f3-69dae1d8f914-combined-ca-bundle\") pod \"b1277926-e4bf-43ff-94f3-69dae1d8f914\" (UID: \"b1277926-e4bf-43ff-94f3-69dae1d8f914\") " Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.128634 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b1277926-e4bf-43ff-94f3-69dae1d8f914-cert-memcached-mtls\") pod \"b1277926-e4bf-43ff-94f3-69dae1d8f914\" (UID: \"b1277926-e4bf-43ff-94f3-69dae1d8f914\") " Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.128698 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1277926-e4bf-43ff-94f3-69dae1d8f914-config-data\") pod \"b1277926-e4bf-43ff-94f3-69dae1d8f914\" (UID: \"b1277926-e4bf-43ff-94f3-69dae1d8f914\") " Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.128719 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1277926-e4bf-43ff-94f3-69dae1d8f914-logs\") pod \"b1277926-e4bf-43ff-94f3-69dae1d8f914\" (UID: \"b1277926-e4bf-43ff-94f3-69dae1d8f914\") " Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.129169 4796 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b1277926-e4bf-43ff-94f3-69dae1d8f914-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.129678 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1277926-e4bf-43ff-94f3-69dae1d8f914-logs" (OuterVolumeSpecName: "logs") pod "b1277926-e4bf-43ff-94f3-69dae1d8f914" (UID: "b1277926-e4bf-43ff-94f3-69dae1d8f914"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.132989 4796 generic.go:334] "Generic (PLEG): container finished" podID="b1277926-e4bf-43ff-94f3-69dae1d8f914" containerID="f1625a9f97d422c571e7776d649cf7db66d8b4be9d16277de520c6d55e5b8f9b" exitCode=0 Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.133022 4796 generic.go:334] "Generic (PLEG): container finished" podID="b1277926-e4bf-43ff-94f3-69dae1d8f914" containerID="5263316e7534a568487f73659ee39445f710daab4323b45e7dd8e9c97dda7de3" exitCode=143 Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.133200 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"b1277926-e4bf-43ff-94f3-69dae1d8f914","Type":"ContainerDied","Data":"f1625a9f97d422c571e7776d649cf7db66d8b4be9d16277de520c6d55e5b8f9b"} Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.133278 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"b1277926-e4bf-43ff-94f3-69dae1d8f914","Type":"ContainerDied","Data":"5263316e7534a568487f73659ee39445f710daab4323b45e7dd8e9c97dda7de3"} Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.133290 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"b1277926-e4bf-43ff-94f3-69dae1d8f914","Type":"ContainerDied","Data":"e37727dee22fb778c813d99646af1ff2bddbe2b0e6f5a6337f140d46dbe4eeae"} Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.133333 4796 scope.go:117] "RemoveContainer" containerID="f1625a9f97d422c571e7776d649cf7db66d8b4be9d16277de520c6d55e5b8f9b" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.133486 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.135422 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1277926-e4bf-43ff-94f3-69dae1d8f914-kube-api-access-znq6m" (OuterVolumeSpecName: "kube-api-access-znq6m") pod "b1277926-e4bf-43ff-94f3-69dae1d8f914" (UID: "b1277926-e4bf-43ff-94f3-69dae1d8f914"). InnerVolumeSpecName "kube-api-access-znq6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.136414 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1277926-e4bf-43ff-94f3-69dae1d8f914-scripts" (OuterVolumeSpecName: "scripts") pod "b1277926-e4bf-43ff-94f3-69dae1d8f914" (UID: "b1277926-e4bf-43ff-94f3-69dae1d8f914"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.147943 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1277926-e4bf-43ff-94f3-69dae1d8f914-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b1277926-e4bf-43ff-94f3-69dae1d8f914" (UID: "b1277926-e4bf-43ff-94f3-69dae1d8f914"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.176135 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1277926-e4bf-43ff-94f3-69dae1d8f914-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1277926-e4bf-43ff-94f3-69dae1d8f914" (UID: "b1277926-e4bf-43ff-94f3-69dae1d8f914"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.205969 4796 scope.go:117] "RemoveContainer" containerID="5263316e7534a568487f73659ee39445f710daab4323b45e7dd8e9c97dda7de3" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.208634 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1277926-e4bf-43ff-94f3-69dae1d8f914-config-data" (OuterVolumeSpecName: "config-data") pod "b1277926-e4bf-43ff-94f3-69dae1d8f914" (UID: "b1277926-e4bf-43ff-94f3-69dae1d8f914"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.221735 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1277926-e4bf-43ff-94f3-69dae1d8f914-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "b1277926-e4bf-43ff-94f3-69dae1d8f914" (UID: "b1277926-e4bf-43ff-94f3-69dae1d8f914"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.239038 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1277926-e4bf-43ff-94f3-69dae1d8f914-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.239068 4796 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b1277926-e4bf-43ff-94f3-69dae1d8f914-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.239078 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1277926-e4bf-43ff-94f3-69dae1d8f914-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.239087 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1277926-e4bf-43ff-94f3-69dae1d8f914-logs\") on node \"crc\" DevicePath \"\"" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.239096 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znq6m\" (UniqueName: \"kubernetes.io/projected/b1277926-e4bf-43ff-94f3-69dae1d8f914-kube-api-access-znq6m\") on node \"crc\" DevicePath \"\"" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.239105 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1277926-e4bf-43ff-94f3-69dae1d8f914-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.239114 4796 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1277926-e4bf-43ff-94f3-69dae1d8f914-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.240026 4796 scope.go:117] "RemoveContainer" containerID="f1625a9f97d422c571e7776d649cf7db66d8b4be9d16277de520c6d55e5b8f9b" Dec 02 20:39:54 crc kubenswrapper[4796]: E1202 20:39:54.241701 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1625a9f97d422c571e7776d649cf7db66d8b4be9d16277de520c6d55e5b8f9b\": container with ID starting with f1625a9f97d422c571e7776d649cf7db66d8b4be9d16277de520c6d55e5b8f9b not 
found: ID does not exist" containerID="f1625a9f97d422c571e7776d649cf7db66d8b4be9d16277de520c6d55e5b8f9b" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.241765 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1625a9f97d422c571e7776d649cf7db66d8b4be9d16277de520c6d55e5b8f9b"} err="failed to get container status \"f1625a9f97d422c571e7776d649cf7db66d8b4be9d16277de520c6d55e5b8f9b\": rpc error: code = NotFound desc = could not find container \"f1625a9f97d422c571e7776d649cf7db66d8b4be9d16277de520c6d55e5b8f9b\": container with ID starting with f1625a9f97d422c571e7776d649cf7db66d8b4be9d16277de520c6d55e5b8f9b not found: ID does not exist" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.241799 4796 scope.go:117] "RemoveContainer" containerID="5263316e7534a568487f73659ee39445f710daab4323b45e7dd8e9c97dda7de3" Dec 02 20:39:54 crc kubenswrapper[4796]: E1202 20:39:54.242802 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5263316e7534a568487f73659ee39445f710daab4323b45e7dd8e9c97dda7de3\": container with ID starting with 5263316e7534a568487f73659ee39445f710daab4323b45e7dd8e9c97dda7de3 not found: ID does not exist" containerID="5263316e7534a568487f73659ee39445f710daab4323b45e7dd8e9c97dda7de3" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.242842 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5263316e7534a568487f73659ee39445f710daab4323b45e7dd8e9c97dda7de3"} err="failed to get container status \"5263316e7534a568487f73659ee39445f710daab4323b45e7dd8e9c97dda7de3\": rpc error: code = NotFound desc = could not find container \"5263316e7534a568487f73659ee39445f710daab4323b45e7dd8e9c97dda7de3\": container with ID starting with 5263316e7534a568487f73659ee39445f710daab4323b45e7dd8e9c97dda7de3 not found: ID does not exist" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.242868 4796 scope.go:117] "RemoveContainer" containerID="f1625a9f97d422c571e7776d649cf7db66d8b4be9d16277de520c6d55e5b8f9b" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.246216 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1625a9f97d422c571e7776d649cf7db66d8b4be9d16277de520c6d55e5b8f9b"} err="failed to get container status \"f1625a9f97d422c571e7776d649cf7db66d8b4be9d16277de520c6d55e5b8f9b\": rpc error: code = NotFound desc = could not find container \"f1625a9f97d422c571e7776d649cf7db66d8b4be9d16277de520c6d55e5b8f9b\": container with ID starting with f1625a9f97d422c571e7776d649cf7db66d8b4be9d16277de520c6d55e5b8f9b not found: ID does not exist" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.246246 4796 scope.go:117] "RemoveContainer" containerID="5263316e7534a568487f73659ee39445f710daab4323b45e7dd8e9c97dda7de3" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.246753 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5263316e7534a568487f73659ee39445f710daab4323b45e7dd8e9c97dda7de3"} err="failed to get container status \"5263316e7534a568487f73659ee39445f710daab4323b45e7dd8e9c97dda7de3\": rpc error: code = NotFound desc = could not find container \"5263316e7534a568487f73659ee39445f710daab4323b45e7dd8e9c97dda7de3\": container with ID starting with 5263316e7534a568487f73659ee39445f710daab4323b45e7dd8e9c97dda7de3 not found: ID does not exist" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.267647 4796 
scope.go:117] "RemoveContainer" containerID="a3b1e4c1e71e5db50d52130283a4f488c4d2f8d9bfa463ec7357e0975e3daac1" Dec 02 20:39:54 crc kubenswrapper[4796]: E1202 20:39:54.267925 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wzhpq_openshift-machine-config-operator(5558dc7c-93f9-4212-bf22-fdec743e47ee)\"" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.343833 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.344057 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="de9cf93c-00d2-489c-9ea7-e006e692e9be" containerName="watcher-decision-engine" containerID="cri-o://72c5154c874e21947b0871a57b47e631a1c2c886eb2f8a6e6651642f4694abba" gracePeriod=30 Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.467782 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.474407 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.528476 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Dec 02 20:39:54 crc kubenswrapper[4796]: E1202 20:39:54.529886 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1277926-e4bf-43ff-94f3-69dae1d8f914" containerName="cinder-api-log" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.529910 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1277926-e4bf-43ff-94f3-69dae1d8f914" containerName="cinder-api-log" Dec 02 20:39:54 crc kubenswrapper[4796]: E1202 20:39:54.530004 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1277926-e4bf-43ff-94f3-69dae1d8f914" containerName="cinder-api" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.530015 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1277926-e4bf-43ff-94f3-69dae1d8f914" containerName="cinder-api" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.530552 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1277926-e4bf-43ff-94f3-69dae1d8f914" containerName="cinder-api" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.530597 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1277926-e4bf-43ff-94f3-69dae1d8f914" containerName="cinder-api-log" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.533012 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.538845 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-cinder-internal-svc" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.538907 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-cinder-public-svc" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.543208 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-api-config-data" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.544854 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.646134 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mdm5\" (UniqueName: \"kubernetes.io/projected/93ee5017-b7fa-4c09-add3-76d393bb0e2b-kube-api-access-2mdm5\") pod \"cinder-api-0\" (UID: \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.646200 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93ee5017-b7fa-4c09-add3-76d393bb0e2b-logs\") pod \"cinder-api-0\" (UID: \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.646223 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/93ee5017-b7fa-4c09-add3-76d393bb0e2b-cert-memcached-mtls\") pod \"cinder-api-0\" (UID: \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.646309 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/93ee5017-b7fa-4c09-add3-76d393bb0e2b-config-data-custom\") pod \"cinder-api-0\" (UID: \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.646336 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93ee5017-b7fa-4c09-add3-76d393bb0e2b-config-data\") pod \"cinder-api-0\" (UID: \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.646360 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93ee5017-b7fa-4c09-add3-76d393bb0e2b-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.646530 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93ee5017-b7fa-4c09-add3-76d393bb0e2b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.646605 
4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/93ee5017-b7fa-4c09-add3-76d393bb0e2b-public-tls-certs\") pod \"cinder-api-0\" (UID: \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.646693 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/93ee5017-b7fa-4c09-add3-76d393bb0e2b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.646820 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93ee5017-b7fa-4c09-add3-76d393bb0e2b-scripts\") pod \"cinder-api-0\" (UID: \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.748657 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93ee5017-b7fa-4c09-add3-76d393bb0e2b-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.748714 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93ee5017-b7fa-4c09-add3-76d393bb0e2b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.748740 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/93ee5017-b7fa-4c09-add3-76d393bb0e2b-public-tls-certs\") pod \"cinder-api-0\" (UID: \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.748769 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/93ee5017-b7fa-4c09-add3-76d393bb0e2b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.748799 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93ee5017-b7fa-4c09-add3-76d393bb0e2b-scripts\") pod \"cinder-api-0\" (UID: \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.748826 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mdm5\" (UniqueName: \"kubernetes.io/projected/93ee5017-b7fa-4c09-add3-76d393bb0e2b-kube-api-access-2mdm5\") pod \"cinder-api-0\" (UID: \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.748864 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93ee5017-b7fa-4c09-add3-76d393bb0e2b-logs\") pod 
\"cinder-api-0\" (UID: \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.748880 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/93ee5017-b7fa-4c09-add3-76d393bb0e2b-cert-memcached-mtls\") pod \"cinder-api-0\" (UID: \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.748921 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/93ee5017-b7fa-4c09-add3-76d393bb0e2b-config-data-custom\") pod \"cinder-api-0\" (UID: \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.748943 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93ee5017-b7fa-4c09-add3-76d393bb0e2b-config-data\") pod \"cinder-api-0\" (UID: \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.750771 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/93ee5017-b7fa-4c09-add3-76d393bb0e2b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.750835 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93ee5017-b7fa-4c09-add3-76d393bb0e2b-logs\") pod \"cinder-api-0\" (UID: \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.754204 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93ee5017-b7fa-4c09-add3-76d393bb0e2b-scripts\") pod \"cinder-api-0\" (UID: \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.755148 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93ee5017-b7fa-4c09-add3-76d393bb0e2b-config-data\") pod \"cinder-api-0\" (UID: \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.755558 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/93ee5017-b7fa-4c09-add3-76d393bb0e2b-public-tls-certs\") pod \"cinder-api-0\" (UID: \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.755651 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/93ee5017-b7fa-4c09-add3-76d393bb0e2b-config-data-custom\") pod \"cinder-api-0\" (UID: \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.755770 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/93ee5017-b7fa-4c09-add3-76d393bb0e2b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.756028 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93ee5017-b7fa-4c09-add3-76d393bb0e2b-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.758953 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/93ee5017-b7fa-4c09-add3-76d393bb0e2b-cert-memcached-mtls\") pod \"cinder-api-0\" (UID: \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.771648 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mdm5\" (UniqueName: \"kubernetes.io/projected/93ee5017-b7fa-4c09-add3-76d393bb0e2b-kube-api-access-2mdm5\") pod \"cinder-api-0\" (UID: \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\") " pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.875481 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.977083 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.977513 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="eb267cf2-7c07-4014-b06c-d34b5517f6ef" containerName="ceilometer-central-agent" containerID="cri-o://9e1578ae6ebfd9c9ca24e454feee5ed591834d017686b2616813fdf39aa57487" gracePeriod=30 Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.977726 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="eb267cf2-7c07-4014-b06c-d34b5517f6ef" containerName="proxy-httpd" containerID="cri-o://d7edf9597ac38b23c14d700eaffd57b76e17f7c411925bf6126fdf1bf558f509" gracePeriod=30 Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.977777 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="eb267cf2-7c07-4014-b06c-d34b5517f6ef" containerName="sg-core" containerID="cri-o://b6c6d0e03d805566a0c0eccc3cd3697d4657d5892d1ce0183136eb45ca5f97bc" gracePeriod=30 Dec 02 20:39:54 crc kubenswrapper[4796]: I1202 20:39:54.977817 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="eb267cf2-7c07-4014-b06c-d34b5517f6ef" containerName="ceilometer-notification-agent" containerID="cri-o://c79c2437589085545392c859ea7ed1cb123d3a14a3fc0328d321a1772532ebda" gracePeriod=30 Dec 02 20:39:55 crc kubenswrapper[4796]: I1202 20:39:55.154739 4796 generic.go:334] "Generic (PLEG): container finished" podID="eb267cf2-7c07-4014-b06c-d34b5517f6ef" containerID="b6c6d0e03d805566a0c0eccc3cd3697d4657d5892d1ce0183136eb45ca5f97bc" exitCode=2 Dec 02 20:39:55 crc kubenswrapper[4796]: I1202 20:39:55.155101 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"eb267cf2-7c07-4014-b06c-d34b5517f6ef","Type":"ContainerDied","Data":"b6c6d0e03d805566a0c0eccc3cd3697d4657d5892d1ce0183136eb45ca5f97bc"} Dec 02 20:39:55 crc kubenswrapper[4796]: I1202 20:39:55.273662 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1277926-e4bf-43ff-94f3-69dae1d8f914" path="/var/lib/kubelet/pods/b1277926-e4bf-43ff-94f3-69dae1d8f914/volumes" Dec 02 20:39:55 crc kubenswrapper[4796]: I1202 20:39:55.339648 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:55 crc kubenswrapper[4796]: I1202 20:39:55.459991 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Dec 02 20:39:55 crc kubenswrapper[4796]: W1202 20:39:55.466151 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93ee5017_b7fa_4c09_add3_76d393bb0e2b.slice/crio-486082df7d79379e3fa805e1ff9a265376c8c6989f8eeb1b591874d890802820 WatchSource:0}: Error finding container 486082df7d79379e3fa805e1ff9a265376c8c6989f8eeb1b591874d890802820: Status 404 returned error can't find the container with id 486082df7d79379e3fa805e1ff9a265376c8c6989f8eeb1b591874d890802820 Dec 02 20:39:56 crc kubenswrapper[4796]: I1202 20:39:56.171093 4796 generic.go:334] "Generic (PLEG): container finished" podID="eb267cf2-7c07-4014-b06c-d34b5517f6ef" containerID="d7edf9597ac38b23c14d700eaffd57b76e17f7c411925bf6126fdf1bf558f509" exitCode=0 Dec 02 20:39:56 crc kubenswrapper[4796]: I1202 20:39:56.171644 4796 generic.go:334] "Generic (PLEG): container finished" podID="eb267cf2-7c07-4014-b06c-d34b5517f6ef" containerID="c79c2437589085545392c859ea7ed1cb123d3a14a3fc0328d321a1772532ebda" exitCode=0 Dec 02 20:39:56 crc kubenswrapper[4796]: I1202 20:39:56.171696 4796 generic.go:334] "Generic (PLEG): container finished" podID="eb267cf2-7c07-4014-b06c-d34b5517f6ef" containerID="9e1578ae6ebfd9c9ca24e454feee5ed591834d017686b2616813fdf39aa57487" exitCode=0 Dec 02 20:39:56 crc kubenswrapper[4796]: I1202 20:39:56.171623 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"eb267cf2-7c07-4014-b06c-d34b5517f6ef","Type":"ContainerDied","Data":"d7edf9597ac38b23c14d700eaffd57b76e17f7c411925bf6126fdf1bf558f509"} Dec 02 20:39:56 crc kubenswrapper[4796]: I1202 20:39:56.172116 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"eb267cf2-7c07-4014-b06c-d34b5517f6ef","Type":"ContainerDied","Data":"c79c2437589085545392c859ea7ed1cb123d3a14a3fc0328d321a1772532ebda"} Dec 02 20:39:56 crc kubenswrapper[4796]: I1202 20:39:56.172132 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"eb267cf2-7c07-4014-b06c-d34b5517f6ef","Type":"ContainerDied","Data":"9e1578ae6ebfd9c9ca24e454feee5ed591834d017686b2616813fdf39aa57487"} Dec 02 20:39:56 crc kubenswrapper[4796]: I1202 20:39:56.173160 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"93ee5017-b7fa-4c09-add3-76d393bb0e2b","Type":"ContainerStarted","Data":"10cdff450040cf2e920bff80607829f00c868d8cbff434179b98562a59be0731"} Dec 02 20:39:56 crc kubenswrapper[4796]: I1202 20:39:56.173184 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" 
event={"ID":"93ee5017-b7fa-4c09-add3-76d393bb0e2b","Type":"ContainerStarted","Data":"486082df7d79379e3fa805e1ff9a265376c8c6989f8eeb1b591874d890802820"} Dec 02 20:39:56 crc kubenswrapper[4796]: I1202 20:39:56.201606 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:56 crc kubenswrapper[4796]: I1202 20:39:56.382082 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb267cf2-7c07-4014-b06c-d34b5517f6ef-run-httpd\") pod \"eb267cf2-7c07-4014-b06c-d34b5517f6ef\" (UID: \"eb267cf2-7c07-4014-b06c-d34b5517f6ef\") " Dec 02 20:39:56 crc kubenswrapper[4796]: I1202 20:39:56.382535 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb267cf2-7c07-4014-b06c-d34b5517f6ef-config-data\") pod \"eb267cf2-7c07-4014-b06c-d34b5517f6ef\" (UID: \"eb267cf2-7c07-4014-b06c-d34b5517f6ef\") " Dec 02 20:39:56 crc kubenswrapper[4796]: I1202 20:39:56.382592 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb267cf2-7c07-4014-b06c-d34b5517f6ef-scripts\") pod \"eb267cf2-7c07-4014-b06c-d34b5517f6ef\" (UID: \"eb267cf2-7c07-4014-b06c-d34b5517f6ef\") " Dec 02 20:39:56 crc kubenswrapper[4796]: I1202 20:39:56.382613 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb267cf2-7c07-4014-b06c-d34b5517f6ef-log-httpd\") pod \"eb267cf2-7c07-4014-b06c-d34b5517f6ef\" (UID: \"eb267cf2-7c07-4014-b06c-d34b5517f6ef\") " Dec 02 20:39:56 crc kubenswrapper[4796]: I1202 20:39:56.382666 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb267cf2-7c07-4014-b06c-d34b5517f6ef-sg-core-conf-yaml\") pod \"eb267cf2-7c07-4014-b06c-d34b5517f6ef\" (UID: \"eb267cf2-7c07-4014-b06c-d34b5517f6ef\") " Dec 02 20:39:56 crc kubenswrapper[4796]: I1202 20:39:56.382747 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlqsk\" (UniqueName: \"kubernetes.io/projected/eb267cf2-7c07-4014-b06c-d34b5517f6ef-kube-api-access-vlqsk\") pod \"eb267cf2-7c07-4014-b06c-d34b5517f6ef\" (UID: \"eb267cf2-7c07-4014-b06c-d34b5517f6ef\") " Dec 02 20:39:56 crc kubenswrapper[4796]: I1202 20:39:56.382789 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb267cf2-7c07-4014-b06c-d34b5517f6ef-ceilometer-tls-certs\") pod \"eb267cf2-7c07-4014-b06c-d34b5517f6ef\" (UID: \"eb267cf2-7c07-4014-b06c-d34b5517f6ef\") " Dec 02 20:39:56 crc kubenswrapper[4796]: I1202 20:39:56.382825 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb267cf2-7c07-4014-b06c-d34b5517f6ef-combined-ca-bundle\") pod \"eb267cf2-7c07-4014-b06c-d34b5517f6ef\" (UID: \"eb267cf2-7c07-4014-b06c-d34b5517f6ef\") " Dec 02 20:39:56 crc kubenswrapper[4796]: I1202 20:39:56.383177 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb267cf2-7c07-4014-b06c-d34b5517f6ef-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "eb267cf2-7c07-4014-b06c-d34b5517f6ef" (UID: "eb267cf2-7c07-4014-b06c-d34b5517f6ef"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:39:56 crc kubenswrapper[4796]: I1202 20:39:56.383302 4796 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb267cf2-7c07-4014-b06c-d34b5517f6ef-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 20:39:56 crc kubenswrapper[4796]: I1202 20:39:56.383530 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb267cf2-7c07-4014-b06c-d34b5517f6ef-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "eb267cf2-7c07-4014-b06c-d34b5517f6ef" (UID: "eb267cf2-7c07-4014-b06c-d34b5517f6ef"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:39:56 crc kubenswrapper[4796]: I1202 20:39:56.386707 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb267cf2-7c07-4014-b06c-d34b5517f6ef-scripts" (OuterVolumeSpecName: "scripts") pod "eb267cf2-7c07-4014-b06c-d34b5517f6ef" (UID: "eb267cf2-7c07-4014-b06c-d34b5517f6ef"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:39:56 crc kubenswrapper[4796]: I1202 20:39:56.387398 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb267cf2-7c07-4014-b06c-d34b5517f6ef-kube-api-access-vlqsk" (OuterVolumeSpecName: "kube-api-access-vlqsk") pod "eb267cf2-7c07-4014-b06c-d34b5517f6ef" (UID: "eb267cf2-7c07-4014-b06c-d34b5517f6ef"). InnerVolumeSpecName "kube-api-access-vlqsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:39:56 crc kubenswrapper[4796]: I1202 20:39:56.426752 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb267cf2-7c07-4014-b06c-d34b5517f6ef-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "eb267cf2-7c07-4014-b06c-d34b5517f6ef" (UID: "eb267cf2-7c07-4014-b06c-d34b5517f6ef"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:39:56 crc kubenswrapper[4796]: I1202 20:39:56.434907 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb267cf2-7c07-4014-b06c-d34b5517f6ef-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "eb267cf2-7c07-4014-b06c-d34b5517f6ef" (UID: "eb267cf2-7c07-4014-b06c-d34b5517f6ef"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:39:56 crc kubenswrapper[4796]: I1202 20:39:56.469759 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb267cf2-7c07-4014-b06c-d34b5517f6ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb267cf2-7c07-4014-b06c-d34b5517f6ef" (UID: "eb267cf2-7c07-4014-b06c-d34b5517f6ef"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:39:56 crc kubenswrapper[4796]: I1202 20:39:56.484909 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlqsk\" (UniqueName: \"kubernetes.io/projected/eb267cf2-7c07-4014-b06c-d34b5517f6ef-kube-api-access-vlqsk\") on node \"crc\" DevicePath \"\"" Dec 02 20:39:56 crc kubenswrapper[4796]: I1202 20:39:56.484949 4796 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb267cf2-7c07-4014-b06c-d34b5517f6ef-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 20:39:56 crc kubenswrapper[4796]: I1202 20:39:56.484961 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb267cf2-7c07-4014-b06c-d34b5517f6ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:39:56 crc kubenswrapper[4796]: I1202 20:39:56.484975 4796 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb267cf2-7c07-4014-b06c-d34b5517f6ef-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 20:39:56 crc kubenswrapper[4796]: I1202 20:39:56.484989 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb267cf2-7c07-4014-b06c-d34b5517f6ef-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:39:56 crc kubenswrapper[4796]: I1202 20:39:56.484999 4796 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb267cf2-7c07-4014-b06c-d34b5517f6ef-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 20:39:56 crc kubenswrapper[4796]: I1202 20:39:56.499567 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb267cf2-7c07-4014-b06c-d34b5517f6ef-config-data" (OuterVolumeSpecName: "config-data") pod "eb267cf2-7c07-4014-b06c-d34b5517f6ef" (UID: "eb267cf2-7c07-4014-b06c-d34b5517f6ef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:39:56 crc kubenswrapper[4796]: I1202 20:39:56.550931 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:56 crc kubenswrapper[4796]: I1202 20:39:56.587111 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb267cf2-7c07-4014-b06c-d34b5517f6ef-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.185996 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"eb267cf2-7c07-4014-b06c-d34b5517f6ef","Type":"ContainerDied","Data":"e5c97a307dcb34bcdad5a6c8db4563a4a02325b0345f80e6eb1eca8bb8f683a6"} Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.186355 4796 scope.go:117] "RemoveContainer" containerID="d7edf9597ac38b23c14d700eaffd57b76e17f7c411925bf6126fdf1bf558f509" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.186022 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.190017 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"93ee5017-b7fa-4c09-add3-76d393bb0e2b","Type":"ContainerStarted","Data":"fd41b87285f93d26194d0bfa1a42aacc254df4deb468734218400b3c53fbf0f3"} Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.190138 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.216210 4796 scope.go:117] "RemoveContainer" containerID="b6c6d0e03d805566a0c0eccc3cd3697d4657d5892d1ce0183136eb45ca5f97bc" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.238395 4796 scope.go:117] "RemoveContainer" containerID="c79c2437589085545392c859ea7ed1cb123d3a14a3fc0328d321a1772532ebda" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.240981 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-api-0" podStartSLOduration=3.240956488 podStartE2EDuration="3.240956488s" podCreationTimestamp="2025-12-02 20:39:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:39:57.215342313 +0000 UTC m=+1680.218717857" watchObservedRunningTime="2025-12-02 20:39:57.240956488 +0000 UTC m=+1680.244332022" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.242689 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.250011 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.278531 4796 scope.go:117] "RemoveContainer" containerID="9e1578ae6ebfd9c9ca24e454feee5ed591834d017686b2616813fdf39aa57487" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.287065 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb267cf2-7c07-4014-b06c-d34b5517f6ef" path="/var/lib/kubelet/pods/eb267cf2-7c07-4014-b06c-d34b5517f6ef/volumes" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.287819 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:39:57 crc kubenswrapper[4796]: E1202 20:39:57.288102 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb267cf2-7c07-4014-b06c-d34b5517f6ef" containerName="sg-core" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.288120 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb267cf2-7c07-4014-b06c-d34b5517f6ef" containerName="sg-core" Dec 02 20:39:57 crc kubenswrapper[4796]: E1202 20:39:57.288160 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb267cf2-7c07-4014-b06c-d34b5517f6ef" containerName="ceilometer-notification-agent" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.288168 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb267cf2-7c07-4014-b06c-d34b5517f6ef" containerName="ceilometer-notification-agent" Dec 02 20:39:57 crc kubenswrapper[4796]: E1202 20:39:57.288182 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb267cf2-7c07-4014-b06c-d34b5517f6ef" containerName="ceilometer-central-agent" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.288188 4796 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="eb267cf2-7c07-4014-b06c-d34b5517f6ef" containerName="ceilometer-central-agent" Dec 02 20:39:57 crc kubenswrapper[4796]: E1202 20:39:57.288198 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb267cf2-7c07-4014-b06c-d34b5517f6ef" containerName="proxy-httpd" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.288204 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb267cf2-7c07-4014-b06c-d34b5517f6ef" containerName="proxy-httpd" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.288371 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb267cf2-7c07-4014-b06c-d34b5517f6ef" containerName="proxy-httpd" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.288389 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb267cf2-7c07-4014-b06c-d34b5517f6ef" containerName="sg-core" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.288404 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb267cf2-7c07-4014-b06c-d34b5517f6ef" containerName="ceilometer-central-agent" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.288415 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb267cf2-7c07-4014-b06c-d34b5517f6ef" containerName="ceilometer-notification-agent" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.290175 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.295838 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.304755 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.304792 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.304994 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.306379 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9r86\" (UniqueName: \"kubernetes.io/projected/b630568c-0328-4374-9115-8ba4633b36d9-kube-api-access-h9r86\") pod \"ceilometer-0\" (UID: \"b630568c-0328-4374-9115-8ba4633b36d9\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.306513 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b630568c-0328-4374-9115-8ba4633b36d9-config-data\") pod \"ceilometer-0\" (UID: \"b630568c-0328-4374-9115-8ba4633b36d9\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.306571 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b630568c-0328-4374-9115-8ba4633b36d9-run-httpd\") pod \"ceilometer-0\" (UID: \"b630568c-0328-4374-9115-8ba4633b36d9\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.306600 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b630568c-0328-4374-9115-8ba4633b36d9-scripts\") pod \"ceilometer-0\" (UID: \"b630568c-0328-4374-9115-8ba4633b36d9\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.315078 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b630568c-0328-4374-9115-8ba4633b36d9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b630568c-0328-4374-9115-8ba4633b36d9\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.316811 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b630568c-0328-4374-9115-8ba4633b36d9-log-httpd\") pod \"ceilometer-0\" (UID: \"b630568c-0328-4374-9115-8ba4633b36d9\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.316930 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b630568c-0328-4374-9115-8ba4633b36d9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b630568c-0328-4374-9115-8ba4633b36d9\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.317079 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b630568c-0328-4374-9115-8ba4633b36d9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b630568c-0328-4374-9115-8ba4633b36d9\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.438101 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9r86\" (UniqueName: \"kubernetes.io/projected/b630568c-0328-4374-9115-8ba4633b36d9-kube-api-access-h9r86\") pod \"ceilometer-0\" (UID: \"b630568c-0328-4374-9115-8ba4633b36d9\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.438243 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b630568c-0328-4374-9115-8ba4633b36d9-config-data\") pod \"ceilometer-0\" (UID: \"b630568c-0328-4374-9115-8ba4633b36d9\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.438327 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b630568c-0328-4374-9115-8ba4633b36d9-run-httpd\") pod \"ceilometer-0\" (UID: \"b630568c-0328-4374-9115-8ba4633b36d9\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.438353 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b630568c-0328-4374-9115-8ba4633b36d9-scripts\") pod \"ceilometer-0\" (UID: \"b630568c-0328-4374-9115-8ba4633b36d9\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.438397 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b630568c-0328-4374-9115-8ba4633b36d9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b630568c-0328-4374-9115-8ba4633b36d9\") " 
pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.438486 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b630568c-0328-4374-9115-8ba4633b36d9-log-httpd\") pod \"ceilometer-0\" (UID: \"b630568c-0328-4374-9115-8ba4633b36d9\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.438572 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b630568c-0328-4374-9115-8ba4633b36d9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b630568c-0328-4374-9115-8ba4633b36d9\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.438663 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b630568c-0328-4374-9115-8ba4633b36d9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b630568c-0328-4374-9115-8ba4633b36d9\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.441235 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b630568c-0328-4374-9115-8ba4633b36d9-run-httpd\") pod \"ceilometer-0\" (UID: \"b630568c-0328-4374-9115-8ba4633b36d9\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.442844 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b630568c-0328-4374-9115-8ba4633b36d9-log-httpd\") pod \"ceilometer-0\" (UID: \"b630568c-0328-4374-9115-8ba4633b36d9\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.471945 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b630568c-0328-4374-9115-8ba4633b36d9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b630568c-0328-4374-9115-8ba4633b36d9\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.471971 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9r86\" (UniqueName: \"kubernetes.io/projected/b630568c-0328-4374-9115-8ba4633b36d9-kube-api-access-h9r86\") pod \"ceilometer-0\" (UID: \"b630568c-0328-4374-9115-8ba4633b36d9\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.493602 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b630568c-0328-4374-9115-8ba4633b36d9-config-data\") pod \"ceilometer-0\" (UID: \"b630568c-0328-4374-9115-8ba4633b36d9\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.494595 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b630568c-0328-4374-9115-8ba4633b36d9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b630568c-0328-4374-9115-8ba4633b36d9\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.506712 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b630568c-0328-4374-9115-8ba4633b36d9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b630568c-0328-4374-9115-8ba4633b36d9\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.510536 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b630568c-0328-4374-9115-8ba4633b36d9-scripts\") pod \"ceilometer-0\" (UID: \"b630568c-0328-4374-9115-8ba4633b36d9\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.622623 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:39:57 crc kubenswrapper[4796]: I1202 20:39:57.743828 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:58 crc kubenswrapper[4796]: I1202 20:39:58.158854 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:39:58 crc kubenswrapper[4796]: I1202 20:39:58.205641 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b630568c-0328-4374-9115-8ba4633b36d9","Type":"ContainerStarted","Data":"45484b03e28148c56c541bf820fa8da2e13499321317a9bf40b723b1e4008327"} Dec 02 20:39:58 crc kubenswrapper[4796]: I1202 20:39:58.959939 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:39:58 crc kubenswrapper[4796]: I1202 20:39:58.968695 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:39:58 crc kubenswrapper[4796]: I1202 20:39:58.972802 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:39:59 crc kubenswrapper[4796]: I1202 20:39:59.038764 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Dec 02 20:39:59 crc kubenswrapper[4796]: I1202 20:39:59.053095 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Dec 02 20:39:59 crc kubenswrapper[4796]: I1202 20:39:59.218562 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b630568c-0328-4374-9115-8ba4633b36d9","Type":"ContainerStarted","Data":"ce50f47769ee4617aeb5d5476925c4d69eccf9c166f910180d59ee35852f47bf"} Dec 02 20:39:59 crc kubenswrapper[4796]: I1202 20:39:59.218823 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-scheduler-0" podUID="73f2d7db-1725-4ebd-9cda-d3eba7d26842" containerName="cinder-scheduler" containerID="cri-o://f219fed746bc252bac3e0b230392580053230da0a83c2167f6eec181dafe7b71" gracePeriod=30 Dec 02 20:39:59 crc kubenswrapper[4796]: I1202 20:39:59.218912 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-scheduler-0" podUID="73f2d7db-1725-4ebd-9cda-d3eba7d26842" containerName="probe" containerID="cri-o://4cc753275983855c1372daf17009a9638bc5a5ba95ee1324a27acb829ebb7df6" gracePeriod=30 Dec 02 20:39:59 crc kubenswrapper[4796]: I1202 20:39:59.218939 4796 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="watcher-kuttl-default/cinder-backup-0" podUID="c58807c0-69fa-43dd-9ce9-e695e624ac61" containerName="cinder-backup" containerID="cri-o://ac0089c2b490b7feae34a1795b8a7fa15ec45f3f2ee8b34d17e4030fc3cab0e8" gracePeriod=30 Dec 02 20:39:59 crc kubenswrapper[4796]: I1202 20:39:59.218997 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-backup-0" podUID="c58807c0-69fa-43dd-9ce9-e695e624ac61" containerName="probe" containerID="cri-o://52ec871b5abe7425a84adcf51d297c66cae98a3fee6102f4a7771787f26607e9" gracePeriod=30 Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.165910 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.234880 4796 generic.go:334] "Generic (PLEG): container finished" podID="73f2d7db-1725-4ebd-9cda-d3eba7d26842" containerID="4cc753275983855c1372daf17009a9638bc5a5ba95ee1324a27acb829ebb7df6" exitCode=0 Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.235387 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"73f2d7db-1725-4ebd-9cda-d3eba7d26842","Type":"ContainerDied","Data":"4cc753275983855c1372daf17009a9638bc5a5ba95ee1324a27acb829ebb7df6"} Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.242836 4796 generic.go:334] "Generic (PLEG): container finished" podID="c58807c0-69fa-43dd-9ce9-e695e624ac61" containerID="52ec871b5abe7425a84adcf51d297c66cae98a3fee6102f4a7771787f26607e9" exitCode=0 Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.242880 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"c58807c0-69fa-43dd-9ce9-e695e624ac61","Type":"ContainerDied","Data":"52ec871b5abe7425a84adcf51d297c66cae98a3fee6102f4a7771787f26607e9"} Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.242955 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"c58807c0-69fa-43dd-9ce9-e695e624ac61","Type":"ContainerDied","Data":"ac0089c2b490b7feae34a1795b8a7fa15ec45f3f2ee8b34d17e4030fc3cab0e8"} Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.242895 4796 generic.go:334] "Generic (PLEG): container finished" podID="c58807c0-69fa-43dd-9ce9-e695e624ac61" containerID="ac0089c2b490b7feae34a1795b8a7fa15ec45f3f2ee8b34d17e4030fc3cab0e8" exitCode=0 Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.246017 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b630568c-0328-4374-9115-8ba4633b36d9","Type":"ContainerStarted","Data":"f885e657b96e1bb5e03368bbd881c61f8a55926395c495a4cecd813bcb41028a"} Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.596488 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.718124 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c58807c0-69fa-43dd-9ce9-e695e624ac61-config-data\") pod \"c58807c0-69fa-43dd-9ce9-e695e624ac61\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.718228 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-etc-iscsi\") pod \"c58807c0-69fa-43dd-9ce9-e695e624ac61\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.718279 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-sys\") pod \"c58807c0-69fa-43dd-9ce9-e695e624ac61\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.718315 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-var-locks-cinder\") pod \"c58807c0-69fa-43dd-9ce9-e695e624ac61\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.718347 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c58807c0-69fa-43dd-9ce9-e695e624ac61-cert-memcached-mtls\") pod \"c58807c0-69fa-43dd-9ce9-e695e624ac61\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.720426 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c58807c0-69fa-43dd-9ce9-e695e624ac61-config-data-custom\") pod \"c58807c0-69fa-43dd-9ce9-e695e624ac61\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.720490 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-lib-modules\") pod \"c58807c0-69fa-43dd-9ce9-e695e624ac61\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.720519 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-run\") pod \"c58807c0-69fa-43dd-9ce9-e695e624ac61\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.720551 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-var-locks-brick\") pod \"c58807c0-69fa-43dd-9ce9-e695e624ac61\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.720565 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-etc-nvme\") pod \"c58807c0-69fa-43dd-9ce9-e695e624ac61\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " Dec 02 
20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.720585 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c58807c0-69fa-43dd-9ce9-e695e624ac61-scripts\") pod \"c58807c0-69fa-43dd-9ce9-e695e624ac61\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.720609 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-dev\") pod \"c58807c0-69fa-43dd-9ce9-e695e624ac61\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.720633 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-etc-machine-id\") pod \"c58807c0-69fa-43dd-9ce9-e695e624ac61\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.720663 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c58807c0-69fa-43dd-9ce9-e695e624ac61-combined-ca-bundle\") pod \"c58807c0-69fa-43dd-9ce9-e695e624ac61\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.720680 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-var-lib-cinder\") pod \"c58807c0-69fa-43dd-9ce9-e695e624ac61\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.720710 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwcj8\" (UniqueName: \"kubernetes.io/projected/c58807c0-69fa-43dd-9ce9-e695e624ac61-kube-api-access-rwcj8\") pod \"c58807c0-69fa-43dd-9ce9-e695e624ac61\" (UID: \"c58807c0-69fa-43dd-9ce9-e695e624ac61\") " Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.721597 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "c58807c0-69fa-43dd-9ce9-e695e624ac61" (UID: "c58807c0-69fa-43dd-9ce9-e695e624ac61"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.721679 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "c58807c0-69fa-43dd-9ce9-e695e624ac61" (UID: "c58807c0-69fa-43dd-9ce9-e695e624ac61"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.721719 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-sys" (OuterVolumeSpecName: "sys") pod "c58807c0-69fa-43dd-9ce9-e695e624ac61" (UID: "c58807c0-69fa-43dd-9ce9-e695e624ac61"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.721762 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c58807c0-69fa-43dd-9ce9-e695e624ac61" (UID: "c58807c0-69fa-43dd-9ce9-e695e624ac61"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.721727 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "c58807c0-69fa-43dd-9ce9-e695e624ac61" (UID: "c58807c0-69fa-43dd-9ce9-e695e624ac61"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.721815 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-dev" (OuterVolumeSpecName: "dev") pod "c58807c0-69fa-43dd-9ce9-e695e624ac61" (UID: "c58807c0-69fa-43dd-9ce9-e695e624ac61"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.722466 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-run" (OuterVolumeSpecName: "run") pod "c58807c0-69fa-43dd-9ce9-e695e624ac61" (UID: "c58807c0-69fa-43dd-9ce9-e695e624ac61"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.722505 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c58807c0-69fa-43dd-9ce9-e695e624ac61" (UID: "c58807c0-69fa-43dd-9ce9-e695e624ac61"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.722534 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "c58807c0-69fa-43dd-9ce9-e695e624ac61" (UID: "c58807c0-69fa-43dd-9ce9-e695e624ac61"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.722562 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "c58807c0-69fa-43dd-9ce9-e695e624ac61" (UID: "c58807c0-69fa-43dd-9ce9-e695e624ac61"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.725781 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c58807c0-69fa-43dd-9ce9-e695e624ac61-scripts" (OuterVolumeSpecName: "scripts") pod "c58807c0-69fa-43dd-9ce9-e695e624ac61" (UID: "c58807c0-69fa-43dd-9ce9-e695e624ac61"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.738303 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c58807c0-69fa-43dd-9ce9-e695e624ac61-kube-api-access-rwcj8" (OuterVolumeSpecName: "kube-api-access-rwcj8") pod "c58807c0-69fa-43dd-9ce9-e695e624ac61" (UID: "c58807c0-69fa-43dd-9ce9-e695e624ac61"). InnerVolumeSpecName "kube-api-access-rwcj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.742486 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c58807c0-69fa-43dd-9ce9-e695e624ac61-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c58807c0-69fa-43dd-9ce9-e695e624ac61" (UID: "c58807c0-69fa-43dd-9ce9-e695e624ac61"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.795475 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c58807c0-69fa-43dd-9ce9-e695e624ac61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c58807c0-69fa-43dd-9ce9-e695e624ac61" (UID: "c58807c0-69fa-43dd-9ce9-e695e624ac61"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.824472 4796 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c58807c0-69fa-43dd-9ce9-e695e624ac61-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.824510 4796 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-lib-modules\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.824521 4796 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-run\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.824531 4796 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-var-locks-brick\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.824539 4796 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-etc-nvme\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.824548 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c58807c0-69fa-43dd-9ce9-e695e624ac61-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.824556 4796 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-dev\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.824563 4796 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.824570 4796 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c58807c0-69fa-43dd-9ce9-e695e624ac61-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.824578 4796 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-var-lib-cinder\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.824585 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwcj8\" (UniqueName: \"kubernetes.io/projected/c58807c0-69fa-43dd-9ce9-e695e624ac61-kube-api-access-rwcj8\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.824597 4796 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-etc-iscsi\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.824605 4796 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-sys\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.824613 4796 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c58807c0-69fa-43dd-9ce9-e695e624ac61-var-locks-cinder\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.832655 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.832778 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c58807c0-69fa-43dd-9ce9-e695e624ac61-config-data" (OuterVolumeSpecName: "config-data") pod "c58807c0-69fa-43dd-9ce9-e695e624ac61" (UID: "c58807c0-69fa-43dd-9ce9-e695e624ac61"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.927930 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/de9cf93c-00d2-489c-9ea7-e006e692e9be-custom-prometheus-ca\") pod \"de9cf93c-00d2-489c-9ea7-e006e692e9be\" (UID: \"de9cf93c-00d2-489c-9ea7-e006e692e9be\") " Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.928205 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de9cf93c-00d2-489c-9ea7-e006e692e9be-config-data\") pod \"de9cf93c-00d2-489c-9ea7-e006e692e9be\" (UID: \"de9cf93c-00d2-489c-9ea7-e006e692e9be\") " Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.928365 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de9cf93c-00d2-489c-9ea7-e006e692e9be-logs\") pod \"de9cf93c-00d2-489c-9ea7-e006e692e9be\" (UID: \"de9cf93c-00d2-489c-9ea7-e006e692e9be\") " Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.928464 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hzlb\" (UniqueName: \"kubernetes.io/projected/de9cf93c-00d2-489c-9ea7-e006e692e9be-kube-api-access-7hzlb\") pod \"de9cf93c-00d2-489c-9ea7-e006e692e9be\" (UID: \"de9cf93c-00d2-489c-9ea7-e006e692e9be\") " Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.928559 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/de9cf93c-00d2-489c-9ea7-e006e692e9be-cert-memcached-mtls\") pod \"de9cf93c-00d2-489c-9ea7-e006e692e9be\" (UID: \"de9cf93c-00d2-489c-9ea7-e006e692e9be\") " Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.928660 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de9cf93c-00d2-489c-9ea7-e006e692e9be-combined-ca-bundle\") pod \"de9cf93c-00d2-489c-9ea7-e006e692e9be\" (UID: \"de9cf93c-00d2-489c-9ea7-e006e692e9be\") " Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.929006 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c58807c0-69fa-43dd-9ce9-e695e624ac61-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.929340 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de9cf93c-00d2-489c-9ea7-e006e692e9be-logs" (OuterVolumeSpecName: "logs") pod "de9cf93c-00d2-489c-9ea7-e006e692e9be" (UID: "de9cf93c-00d2-489c-9ea7-e006e692e9be"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.932737 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de9cf93c-00d2-489c-9ea7-e006e692e9be-kube-api-access-7hzlb" (OuterVolumeSpecName: "kube-api-access-7hzlb") pod "de9cf93c-00d2-489c-9ea7-e006e692e9be" (UID: "de9cf93c-00d2-489c-9ea7-e006e692e9be"). InnerVolumeSpecName "kube-api-access-7hzlb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.958720 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de9cf93c-00d2-489c-9ea7-e006e692e9be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de9cf93c-00d2-489c-9ea7-e006e692e9be" (UID: "de9cf93c-00d2-489c-9ea7-e006e692e9be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.960096 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de9cf93c-00d2-489c-9ea7-e006e692e9be-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "de9cf93c-00d2-489c-9ea7-e006e692e9be" (UID: "de9cf93c-00d2-489c-9ea7-e006e692e9be"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.960759 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c58807c0-69fa-43dd-9ce9-e695e624ac61-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "c58807c0-69fa-43dd-9ce9-e695e624ac61" (UID: "c58807c0-69fa-43dd-9ce9-e695e624ac61"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:00 crc kubenswrapper[4796]: I1202 20:40:00.978573 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de9cf93c-00d2-489c-9ea7-e006e692e9be-config-data" (OuterVolumeSpecName: "config-data") pod "de9cf93c-00d2-489c-9ea7-e006e692e9be" (UID: "de9cf93c-00d2-489c-9ea7-e006e692e9be"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.020517 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de9cf93c-00d2-489c-9ea7-e006e692e9be-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "de9cf93c-00d2-489c-9ea7-e006e692e9be" (UID: "de9cf93c-00d2-489c-9ea7-e006e692e9be"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.032033 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hzlb\" (UniqueName: \"kubernetes.io/projected/de9cf93c-00d2-489c-9ea7-e006e692e9be-kube-api-access-7hzlb\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.032071 4796 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/de9cf93c-00d2-489c-9ea7-e006e692e9be-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.032083 4796 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c58807c0-69fa-43dd-9ce9-e695e624ac61-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.032093 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de9cf93c-00d2-489c-9ea7-e006e692e9be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.032104 4796 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/de9cf93c-00d2-489c-9ea7-e006e692e9be-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.032119 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de9cf93c-00d2-489c-9ea7-e006e692e9be-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.032129 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de9cf93c-00d2-489c-9ea7-e006e692e9be-logs\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.267279 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.273443 4796 generic.go:334] "Generic (PLEG): container finished" podID="de9cf93c-00d2-489c-9ea7-e006e692e9be" containerID="72c5154c874e21947b0871a57b47e631a1c2c886eb2f8a6e6651642f4694abba" exitCode=0 Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.273560 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.302513 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"c58807c0-69fa-43dd-9ce9-e695e624ac61","Type":"ContainerDied","Data":"af9a8d31eed3dd9e1db0b1cd34fafe2d3e37ab10bde977feb6a8a1ea581b310f"} Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.302599 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b630568c-0328-4374-9115-8ba4633b36d9","Type":"ContainerStarted","Data":"ce5daf3697504ebdf29a52e70e51dd8ea5e18f0825c6700427dd2a3751e1d621"} Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.302628 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"de9cf93c-00d2-489c-9ea7-e006e692e9be","Type":"ContainerDied","Data":"72c5154c874e21947b0871a57b47e631a1c2c886eb2f8a6e6651642f4694abba"} Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.302653 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"de9cf93c-00d2-489c-9ea7-e006e692e9be","Type":"ContainerDied","Data":"42591179de814cfd3392406703b2d2c529be76a15654f62f2e6e8f034677ae7e"} Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.302696 4796 scope.go:117] "RemoveContainer" containerID="52ec871b5abe7425a84adcf51d297c66cae98a3fee6102f4a7771787f26607e9" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.303914 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.332926 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.361677 4796 scope.go:117] "RemoveContainer" containerID="ac0089c2b490b7feae34a1795b8a7fa15ec45f3f2ee8b34d17e4030fc3cab0e8" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.373472 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_de9cf93c-00d2-489c-9ea7-e006e692e9be/watcher-decision-engine/0.log" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.380039 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.393320 4796 scope.go:117] "RemoveContainer" containerID="72c5154c874e21947b0871a57b47e631a1c2c886eb2f8a6e6651642f4694abba" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.394071 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.404228 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Dec 02 20:40:01 crc kubenswrapper[4796]: E1202 20:40:01.404831 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de9cf93c-00d2-489c-9ea7-e006e692e9be" containerName="watcher-decision-engine" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.404869 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="de9cf93c-00d2-489c-9ea7-e006e692e9be" containerName="watcher-decision-engine" Dec 02 20:40:01 crc kubenswrapper[4796]: E1202 20:40:01.404886 4796 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c58807c0-69fa-43dd-9ce9-e695e624ac61" containerName="cinder-backup" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.404898 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c58807c0-69fa-43dd-9ce9-e695e624ac61" containerName="cinder-backup" Dec 02 20:40:01 crc kubenswrapper[4796]: E1202 20:40:01.404946 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c58807c0-69fa-43dd-9ce9-e695e624ac61" containerName="probe" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.404958 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c58807c0-69fa-43dd-9ce9-e695e624ac61" containerName="probe" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.405315 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="de9cf93c-00d2-489c-9ea7-e006e692e9be" containerName="watcher-decision-engine" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.405382 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="c58807c0-69fa-43dd-9ce9-e695e624ac61" containerName="probe" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.405407 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="c58807c0-69fa-43dd-9ce9-e695e624ac61" containerName="cinder-backup" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.411196 4796 scope.go:117] "RemoveContainer" containerID="72c5154c874e21947b0871a57b47e631a1c2c886eb2f8a6e6651642f4694abba" Dec 02 20:40:01 crc kubenswrapper[4796]: E1202 20:40:01.411671 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72c5154c874e21947b0871a57b47e631a1c2c886eb2f8a6e6651642f4694abba\": container with ID starting with 72c5154c874e21947b0871a57b47e631a1c2c886eb2f8a6e6651642f4694abba not found: ID does not exist" containerID="72c5154c874e21947b0871a57b47e631a1c2c886eb2f8a6e6651642f4694abba" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.411704 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72c5154c874e21947b0871a57b47e631a1c2c886eb2f8a6e6651642f4694abba"} err="failed to get container status \"72c5154c874e21947b0871a57b47e631a1c2c886eb2f8a6e6651642f4694abba\": rpc error: code = NotFound desc = could not find container \"72c5154c874e21947b0871a57b47e631a1c2c886eb2f8a6e6651642f4694abba\": container with ID starting with 72c5154c874e21947b0871a57b47e631a1c2c886eb2f8a6e6651642f4694abba not found: ID does not exist" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.422713 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.423835 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.429020 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-backup-config-data" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.434271 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.434371 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.435613 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.440063 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.541053 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c355ccd-843e-46c4-93d0-bd00e17867e8-config-data\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.541318 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.541420 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-sys\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.541518 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.541689 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6fb03211-0ce8-4dde-96ec-6692589cd895-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"6fb03211-0ce8-4dde-96ec-6692589cd895\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.541890 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fb03211-0ce8-4dde-96ec-6692589cd895-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"6fb03211-0ce8-4dde-96ec-6692589cd895\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.542067 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgk4c\" (UniqueName: \"kubernetes.io/projected/6fb03211-0ce8-4dde-96ec-6692589cd895-kube-api-access-jgk4c\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"6fb03211-0ce8-4dde-96ec-6692589cd895\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.542202 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/6c355ccd-843e-46c4-93d0-bd00e17867e8-config-data-custom\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.542273 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c355ccd-843e-46c4-93d0-bd00e17867e8-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.542310 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.542340 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mddf5\" (UniqueName: \"kubernetes.io/projected/6c355ccd-843e-46c4-93d0-bd00e17867e8-kube-api-access-mddf5\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.542399 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-lib-modules\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.542462 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fb03211-0ce8-4dde-96ec-6692589cd895-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"6fb03211-0ce8-4dde-96ec-6692589cd895\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.542496 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.542524 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c355ccd-843e-46c4-93d0-bd00e17867e8-scripts\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.542575 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.542628 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: 
\"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-dev\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.542669 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-run\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.542719 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6c355ccd-843e-46c4-93d0-bd00e17867e8-cert-memcached-mtls\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.542801 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-etc-nvme\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.542878 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fb03211-0ce8-4dde-96ec-6692589cd895-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"6fb03211-0ce8-4dde-96ec-6692589cd895\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.542898 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/6fb03211-0ce8-4dde-96ec-6692589cd895-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"6fb03211-0ce8-4dde-96ec-6692589cd895\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.644112 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-dev\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.644500 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-run\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.644521 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6c355ccd-843e-46c4-93d0-bd00e17867e8-cert-memcached-mtls\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.644542 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-etc-nvme\") pod 
\"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.644572 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fb03211-0ce8-4dde-96ec-6692589cd895-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"6fb03211-0ce8-4dde-96ec-6692589cd895\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.644592 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/6fb03211-0ce8-4dde-96ec-6692589cd895-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"6fb03211-0ce8-4dde-96ec-6692589cd895\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.644634 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c355ccd-843e-46c4-93d0-bd00e17867e8-config-data\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.644651 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.644670 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-sys\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.644686 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.644705 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6fb03211-0ce8-4dde-96ec-6692589cd895-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"6fb03211-0ce8-4dde-96ec-6692589cd895\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.644732 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fb03211-0ce8-4dde-96ec-6692589cd895-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"6fb03211-0ce8-4dde-96ec-6692589cd895\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.644761 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgk4c\" (UniqueName: \"kubernetes.io/projected/6fb03211-0ce8-4dde-96ec-6692589cd895-kube-api-access-jgk4c\") pod \"watcher-kuttl-decision-engine-0\" (UID: 
\"6fb03211-0ce8-4dde-96ec-6692589cd895\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.644797 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c355ccd-843e-46c4-93d0-bd00e17867e8-config-data-custom\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.644820 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c355ccd-843e-46c4-93d0-bd00e17867e8-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.644841 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.644858 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mddf5\" (UniqueName: \"kubernetes.io/projected/6c355ccd-843e-46c4-93d0-bd00e17867e8-kube-api-access-mddf5\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.644876 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-lib-modules\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.644894 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fb03211-0ce8-4dde-96ec-6692589cd895-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"6fb03211-0ce8-4dde-96ec-6692589cd895\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.644910 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c355ccd-843e-46c4-93d0-bd00e17867e8-scripts\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.644927 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.644947 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 
20:40:01.645027 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.644340 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-dev\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.645067 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-run\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.645748 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fb03211-0ce8-4dde-96ec-6692589cd895-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"6fb03211-0ce8-4dde-96ec-6692589cd895\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.645829 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-etc-nvme\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.647786 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.648166 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-lib-modules\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.648305 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.648402 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.648443 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-sys\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " 
pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.650018 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.658629 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6fb03211-0ce8-4dde-96ec-6692589cd895-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"6fb03211-0ce8-4dde-96ec-6692589cd895\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.660826 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6c355ccd-843e-46c4-93d0-bd00e17867e8-cert-memcached-mtls\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.668908 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c355ccd-843e-46c4-93d0-bd00e17867e8-scripts\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.669393 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fb03211-0ce8-4dde-96ec-6692589cd895-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"6fb03211-0ce8-4dde-96ec-6692589cd895\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.671513 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c355ccd-843e-46c4-93d0-bd00e17867e8-config-data\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.681917 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgk4c\" (UniqueName: \"kubernetes.io/projected/6fb03211-0ce8-4dde-96ec-6692589cd895-kube-api-access-jgk4c\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"6fb03211-0ce8-4dde-96ec-6692589cd895\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.690937 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fb03211-0ce8-4dde-96ec-6692589cd895-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"6fb03211-0ce8-4dde-96ec-6692589cd895\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.694891 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/6fb03211-0ce8-4dde-96ec-6692589cd895-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"6fb03211-0ce8-4dde-96ec-6692589cd895\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 
20:40:01.698817 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c355ccd-843e-46c4-93d0-bd00e17867e8-config-data-custom\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.706473 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c355ccd-843e-46c4-93d0-bd00e17867e8-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.709253 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mddf5\" (UniqueName: \"kubernetes.io/projected/6c355ccd-843e-46c4-93d0-bd00e17867e8-kube-api-access-mddf5\") pod \"cinder-backup-0\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.822042 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:01 crc kubenswrapper[4796]: I1202 20:40:01.831485 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:02 crc kubenswrapper[4796]: I1202 20:40:02.287618 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b630568c-0328-4374-9115-8ba4633b36d9","Type":"ContainerStarted","Data":"507699ff02d6cab9df8266a4c9169940352a204f4bb96cad363a7633c96e6961"} Dec 02 20:40:02 crc kubenswrapper[4796]: I1202 20:40:02.288961 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:02 crc kubenswrapper[4796]: I1202 20:40:02.308407 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.8971809670000002 podStartE2EDuration="5.308390645s" podCreationTimestamp="2025-12-02 20:39:57 +0000 UTC" firstStartedPulling="2025-12-02 20:39:58.160367082 +0000 UTC m=+1681.163742646" lastFinishedPulling="2025-12-02 20:40:01.57157679 +0000 UTC m=+1684.574952324" observedRunningTime="2025-12-02 20:40:02.305233578 +0000 UTC m=+1685.308609122" watchObservedRunningTime="2025-12-02 20:40:02.308390645 +0000 UTC m=+1685.311766189" Dec 02 20:40:02 crc kubenswrapper[4796]: I1202 20:40:02.371167 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:40:02 crc kubenswrapper[4796]: W1202 20:40:02.373806 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fb03211_0ce8_4dde_96ec_6692589cd895.slice/crio-991feba09951dc3221c3e54b8293530c07dd8d2386306151c398e20faa47d0ba WatchSource:0}: Error finding container 991feba09951dc3221c3e54b8293530c07dd8d2386306151c398e20faa47d0ba: Status 404 returned error can't find the container with id 991feba09951dc3221c3e54b8293530c07dd8d2386306151c398e20faa47d0ba Dec 02 20:40:02 crc kubenswrapper[4796]: I1202 20:40:02.433344 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Dec 02 20:40:02 crc kubenswrapper[4796]: W1202 20:40:02.436744 4796 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c355ccd_843e_46c4_93d0_bd00e17867e8.slice/crio-c72dea865b15f9f80debfdfdd1bc92485569ec01a86833b87950eb32da758b9e WatchSource:0}: Error finding container c72dea865b15f9f80debfdfdd1bc92485569ec01a86833b87950eb32da758b9e: Status 404 returned error can't find the container with id c72dea865b15f9f80debfdfdd1bc92485569ec01a86833b87950eb32da758b9e Dec 02 20:40:03 crc kubenswrapper[4796]: I1202 20:40:03.294117 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c58807c0-69fa-43dd-9ce9-e695e624ac61" path="/var/lib/kubelet/pods/c58807c0-69fa-43dd-9ce9-e695e624ac61/volumes" Dec 02 20:40:03 crc kubenswrapper[4796]: I1202 20:40:03.295835 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de9cf93c-00d2-489c-9ea7-e006e692e9be" path="/var/lib/kubelet/pods/de9cf93c-00d2-489c-9ea7-e006e692e9be/volumes" Dec 02 20:40:03 crc kubenswrapper[4796]: I1202 20:40:03.311750 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"6c355ccd-843e-46c4-93d0-bd00e17867e8","Type":"ContainerStarted","Data":"94b9a4b4318620134b7e7c35d555717376508a7a4bffd37c1beeaa5d6af401b0"} Dec 02 20:40:03 crc kubenswrapper[4796]: I1202 20:40:03.311815 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"6c355ccd-843e-46c4-93d0-bd00e17867e8","Type":"ContainerStarted","Data":"bc73b5768b9ffc6e83cd0f0196b00c4b64c779767c336913681cc7cc0a7b7999"} Dec 02 20:40:03 crc kubenswrapper[4796]: I1202 20:40:03.311825 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"6c355ccd-843e-46c4-93d0-bd00e17867e8","Type":"ContainerStarted","Data":"c72dea865b15f9f80debfdfdd1bc92485569ec01a86833b87950eb32da758b9e"} Dec 02 20:40:03 crc kubenswrapper[4796]: I1202 20:40:03.317337 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"6fb03211-0ce8-4dde-96ec-6692589cd895","Type":"ContainerStarted","Data":"f73d9e9a00b60d8a61dd714a5e0b8347b03577df9dba425911db7b5e0b3b0659"} Dec 02 20:40:03 crc kubenswrapper[4796]: I1202 20:40:03.317415 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"6fb03211-0ce8-4dde-96ec-6692589cd895","Type":"ContainerStarted","Data":"991feba09951dc3221c3e54b8293530c07dd8d2386306151c398e20faa47d0ba"} Dec 02 20:40:03 crc kubenswrapper[4796]: I1202 20:40:03.407908 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-backup-0" podStartSLOduration=2.407879827 podStartE2EDuration="2.407879827s" podCreationTimestamp="2025-12-02 20:40:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:40:03.341809539 +0000 UTC m=+1686.345185073" watchObservedRunningTime="2025-12-02 20:40:03.407879827 +0000 UTC m=+1686.411255361" Dec 02 20:40:03 crc kubenswrapper[4796]: I1202 20:40:03.414185 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.414175728 podStartE2EDuration="2.414175728s" podCreationTimestamp="2025-12-02 20:40:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-02 20:40:03.361235375 +0000 UTC m=+1686.364610919" watchObservedRunningTime="2025-12-02 20:40:03.414175728 +0000 UTC m=+1686.417551262" Dec 02 20:40:03 crc kubenswrapper[4796]: I1202 20:40:03.705747 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_6fb03211-0ce8-4dde-96ec-6692589cd895/watcher-decision-engine/0.log" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.015942 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.119704 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73f2d7db-1725-4ebd-9cda-d3eba7d26842-config-data-custom\") pod \"73f2d7db-1725-4ebd-9cda-d3eba7d26842\" (UID: \"73f2d7db-1725-4ebd-9cda-d3eba7d26842\") " Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.119833 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73f2d7db-1725-4ebd-9cda-d3eba7d26842-combined-ca-bundle\") pod \"73f2d7db-1725-4ebd-9cda-d3eba7d26842\" (UID: \"73f2d7db-1725-4ebd-9cda-d3eba7d26842\") " Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.119879 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73f2d7db-1725-4ebd-9cda-d3eba7d26842-scripts\") pod \"73f2d7db-1725-4ebd-9cda-d3eba7d26842\" (UID: \"73f2d7db-1725-4ebd-9cda-d3eba7d26842\") " Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.119920 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blz56\" (UniqueName: \"kubernetes.io/projected/73f2d7db-1725-4ebd-9cda-d3eba7d26842-kube-api-access-blz56\") pod \"73f2d7db-1725-4ebd-9cda-d3eba7d26842\" (UID: \"73f2d7db-1725-4ebd-9cda-d3eba7d26842\") " Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.120014 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73f2d7db-1725-4ebd-9cda-d3eba7d26842-config-data\") pod \"73f2d7db-1725-4ebd-9cda-d3eba7d26842\" (UID: \"73f2d7db-1725-4ebd-9cda-d3eba7d26842\") " Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.120063 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73f2d7db-1725-4ebd-9cda-d3eba7d26842-etc-machine-id\") pod \"73f2d7db-1725-4ebd-9cda-d3eba7d26842\" (UID: \"73f2d7db-1725-4ebd-9cda-d3eba7d26842\") " Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.120104 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/73f2d7db-1725-4ebd-9cda-d3eba7d26842-cert-memcached-mtls\") pod \"73f2d7db-1725-4ebd-9cda-d3eba7d26842\" (UID: \"73f2d7db-1725-4ebd-9cda-d3eba7d26842\") " Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.122053 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73f2d7db-1725-4ebd-9cda-d3eba7d26842-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "73f2d7db-1725-4ebd-9cda-d3eba7d26842" (UID: "73f2d7db-1725-4ebd-9cda-d3eba7d26842"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.131406 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73f2d7db-1725-4ebd-9cda-d3eba7d26842-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "73f2d7db-1725-4ebd-9cda-d3eba7d26842" (UID: "73f2d7db-1725-4ebd-9cda-d3eba7d26842"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.134035 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73f2d7db-1725-4ebd-9cda-d3eba7d26842-scripts" (OuterVolumeSpecName: "scripts") pod "73f2d7db-1725-4ebd-9cda-d3eba7d26842" (UID: "73f2d7db-1725-4ebd-9cda-d3eba7d26842"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.139624 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73f2d7db-1725-4ebd-9cda-d3eba7d26842-kube-api-access-blz56" (OuterVolumeSpecName: "kube-api-access-blz56") pod "73f2d7db-1725-4ebd-9cda-d3eba7d26842" (UID: "73f2d7db-1725-4ebd-9cda-d3eba7d26842"). InnerVolumeSpecName "kube-api-access-blz56". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.183393 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73f2d7db-1725-4ebd-9cda-d3eba7d26842-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "73f2d7db-1725-4ebd-9cda-d3eba7d26842" (UID: "73f2d7db-1725-4ebd-9cda-d3eba7d26842"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.222432 4796 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73f2d7db-1725-4ebd-9cda-d3eba7d26842-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.222461 4796 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73f2d7db-1725-4ebd-9cda-d3eba7d26842-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.222470 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73f2d7db-1725-4ebd-9cda-d3eba7d26842-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.222479 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73f2d7db-1725-4ebd-9cda-d3eba7d26842-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.222488 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blz56\" (UniqueName: \"kubernetes.io/projected/73f2d7db-1725-4ebd-9cda-d3eba7d26842-kube-api-access-blz56\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.251457 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73f2d7db-1725-4ebd-9cda-d3eba7d26842-config-data" (OuterVolumeSpecName: "config-data") pod "73f2d7db-1725-4ebd-9cda-d3eba7d26842" (UID: "73f2d7db-1725-4ebd-9cda-d3eba7d26842"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.325831 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73f2d7db-1725-4ebd-9cda-d3eba7d26842-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.329446 4796 generic.go:334] "Generic (PLEG): container finished" podID="73f2d7db-1725-4ebd-9cda-d3eba7d26842" containerID="f219fed746bc252bac3e0b230392580053230da0a83c2167f6eec181dafe7b71" exitCode=0 Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.330146 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.330722 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"73f2d7db-1725-4ebd-9cda-d3eba7d26842","Type":"ContainerDied","Data":"f219fed746bc252bac3e0b230392580053230da0a83c2167f6eec181dafe7b71"} Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.330745 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"73f2d7db-1725-4ebd-9cda-d3eba7d26842","Type":"ContainerDied","Data":"11d2792a8eb82de82f1307463f25d75c7c71de45ec2717617f1c0ae4ceb246cb"} Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.330760 4796 scope.go:117] "RemoveContainer" containerID="4cc753275983855c1372daf17009a9638bc5a5ba95ee1324a27acb829ebb7df6" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.341380 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73f2d7db-1725-4ebd-9cda-d3eba7d26842-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "73f2d7db-1725-4ebd-9cda-d3eba7d26842" (UID: "73f2d7db-1725-4ebd-9cda-d3eba7d26842"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.369381 4796 scope.go:117] "RemoveContainer" containerID="f219fed746bc252bac3e0b230392580053230da0a83c2167f6eec181dafe7b71" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.388245 4796 scope.go:117] "RemoveContainer" containerID="4cc753275983855c1372daf17009a9638bc5a5ba95ee1324a27acb829ebb7df6" Dec 02 20:40:04 crc kubenswrapper[4796]: E1202 20:40:04.388616 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cc753275983855c1372daf17009a9638bc5a5ba95ee1324a27acb829ebb7df6\": container with ID starting with 4cc753275983855c1372daf17009a9638bc5a5ba95ee1324a27acb829ebb7df6 not found: ID does not exist" containerID="4cc753275983855c1372daf17009a9638bc5a5ba95ee1324a27acb829ebb7df6" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.388648 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cc753275983855c1372daf17009a9638bc5a5ba95ee1324a27acb829ebb7df6"} err="failed to get container status \"4cc753275983855c1372daf17009a9638bc5a5ba95ee1324a27acb829ebb7df6\": rpc error: code = NotFound desc = could not find container \"4cc753275983855c1372daf17009a9638bc5a5ba95ee1324a27acb829ebb7df6\": container with ID starting with 4cc753275983855c1372daf17009a9638bc5a5ba95ee1324a27acb829ebb7df6 not found: ID does not exist" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.388668 4796 scope.go:117] "RemoveContainer" containerID="f219fed746bc252bac3e0b230392580053230da0a83c2167f6eec181dafe7b71" Dec 02 20:40:04 crc kubenswrapper[4796]: E1202 20:40:04.389125 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f219fed746bc252bac3e0b230392580053230da0a83c2167f6eec181dafe7b71\": container with ID starting with f219fed746bc252bac3e0b230392580053230da0a83c2167f6eec181dafe7b71 not found: ID does not exist" containerID="f219fed746bc252bac3e0b230392580053230da0a83c2167f6eec181dafe7b71" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.389241 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f219fed746bc252bac3e0b230392580053230da0a83c2167f6eec181dafe7b71"} err="failed to get container status \"f219fed746bc252bac3e0b230392580053230da0a83c2167f6eec181dafe7b71\": rpc error: code = NotFound desc = could not find container \"f219fed746bc252bac3e0b230392580053230da0a83c2167f6eec181dafe7b71\": container with ID starting with f219fed746bc252bac3e0b230392580053230da0a83c2167f6eec181dafe7b71 not found: ID does not exist" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.427304 4796 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/73f2d7db-1725-4ebd-9cda-d3eba7d26842-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.678045 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.687395 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.704741 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Dec 02 20:40:04 crc kubenswrapper[4796]: E1202 20:40:04.705165 4796 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73f2d7db-1725-4ebd-9cda-d3eba7d26842" containerName="cinder-scheduler" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.705188 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="73f2d7db-1725-4ebd-9cda-d3eba7d26842" containerName="cinder-scheduler" Dec 02 20:40:04 crc kubenswrapper[4796]: E1202 20:40:04.705226 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73f2d7db-1725-4ebd-9cda-d3eba7d26842" containerName="probe" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.705236 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="73f2d7db-1725-4ebd-9cda-d3eba7d26842" containerName="probe" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.705454 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="73f2d7db-1725-4ebd-9cda-d3eba7d26842" containerName="probe" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.705484 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="73f2d7db-1725-4ebd-9cda-d3eba7d26842" containerName="cinder-scheduler" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.706902 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.711321 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-scheduler-config-data" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.732890 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.733619 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsght\" (UniqueName: \"kubernetes.io/projected/43d6700e-9065-436f-aa67-68590f89f57d-kube-api-access-hsght\") pod \"cinder-scheduler-0\" (UID: \"43d6700e-9065-436f-aa67-68590f89f57d\") " pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.733711 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43d6700e-9065-436f-aa67-68590f89f57d-config-data\") pod \"cinder-scheduler-0\" (UID: \"43d6700e-9065-436f-aa67-68590f89f57d\") " pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.733778 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43d6700e-9065-436f-aa67-68590f89f57d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"43d6700e-9065-436f-aa67-68590f89f57d\") " pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.733818 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43d6700e-9065-436f-aa67-68590f89f57d-scripts\") pod \"cinder-scheduler-0\" (UID: \"43d6700e-9065-436f-aa67-68590f89f57d\") " pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.733849 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43d6700e-9065-436f-aa67-68590f89f57d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"43d6700e-9065-436f-aa67-68590f89f57d\") " pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.733881 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/43d6700e-9065-436f-aa67-68590f89f57d-cert-memcached-mtls\") pod \"cinder-scheduler-0\" (UID: \"43d6700e-9065-436f-aa67-68590f89f57d\") " pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.733906 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/43d6700e-9065-436f-aa67-68590f89f57d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"43d6700e-9065-436f-aa67-68590f89f57d\") " pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.836694 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsght\" (UniqueName: \"kubernetes.io/projected/43d6700e-9065-436f-aa67-68590f89f57d-kube-api-access-hsght\") pod \"cinder-scheduler-0\" (UID: \"43d6700e-9065-436f-aa67-68590f89f57d\") " pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.836773 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43d6700e-9065-436f-aa67-68590f89f57d-config-data\") pod \"cinder-scheduler-0\" (UID: \"43d6700e-9065-436f-aa67-68590f89f57d\") " pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.836819 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43d6700e-9065-436f-aa67-68590f89f57d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"43d6700e-9065-436f-aa67-68590f89f57d\") " pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.836855 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43d6700e-9065-436f-aa67-68590f89f57d-scripts\") pod \"cinder-scheduler-0\" (UID: \"43d6700e-9065-436f-aa67-68590f89f57d\") " pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.836878 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43d6700e-9065-436f-aa67-68590f89f57d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"43d6700e-9065-436f-aa67-68590f89f57d\") " pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.836904 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/43d6700e-9065-436f-aa67-68590f89f57d-cert-memcached-mtls\") pod \"cinder-scheduler-0\" (UID: \"43d6700e-9065-436f-aa67-68590f89f57d\") " pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.836928 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/43d6700e-9065-436f-aa67-68590f89f57d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"43d6700e-9065-436f-aa67-68590f89f57d\") " 
pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.837027 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/43d6700e-9065-436f-aa67-68590f89f57d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"43d6700e-9065-436f-aa67-68590f89f57d\") " pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.843024 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/43d6700e-9065-436f-aa67-68590f89f57d-cert-memcached-mtls\") pod \"cinder-scheduler-0\" (UID: \"43d6700e-9065-436f-aa67-68590f89f57d\") " pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.843599 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43d6700e-9065-436f-aa67-68590f89f57d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"43d6700e-9065-436f-aa67-68590f89f57d\") " pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.844965 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43d6700e-9065-436f-aa67-68590f89f57d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"43d6700e-9065-436f-aa67-68590f89f57d\") " pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.845448 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43d6700e-9065-436f-aa67-68590f89f57d-scripts\") pod \"cinder-scheduler-0\" (UID: \"43d6700e-9065-436f-aa67-68590f89f57d\") " pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.858143 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43d6700e-9065-436f-aa67-68590f89f57d-config-data\") pod \"cinder-scheduler-0\" (UID: \"43d6700e-9065-436f-aa67-68590f89f57d\") " pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.858922 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsght\" (UniqueName: \"kubernetes.io/projected/43d6700e-9065-436f-aa67-68590f89f57d-kube-api-access-hsght\") pod \"cinder-scheduler-0\" (UID: \"43d6700e-9065-436f-aa67-68590f89f57d\") " pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:40:04 crc kubenswrapper[4796]: I1202 20:40:04.897647 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_6fb03211-0ce8-4dde-96ec-6692589cd895/watcher-decision-engine/0.log" Dec 02 20:40:05 crc kubenswrapper[4796]: I1202 20:40:05.021900 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:40:05 crc kubenswrapper[4796]: I1202 20:40:05.278351 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73f2d7db-1725-4ebd-9cda-d3eba7d26842" path="/var/lib/kubelet/pods/73f2d7db-1725-4ebd-9cda-d3eba7d26842/volumes" Dec 02 20:40:05 crc kubenswrapper[4796]: I1202 20:40:05.518905 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Dec 02 20:40:06 crc kubenswrapper[4796]: I1202 20:40:06.105109 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_6fb03211-0ce8-4dde-96ec-6692589cd895/watcher-decision-engine/0.log" Dec 02 20:40:06 crc kubenswrapper[4796]: I1202 20:40:06.361094 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"43d6700e-9065-436f-aa67-68590f89f57d","Type":"ContainerStarted","Data":"a258540b5af5fee92f2bf3511f9559eb722b0777a8562a8a73987c2845e6126a"} Dec 02 20:40:06 crc kubenswrapper[4796]: I1202 20:40:06.361137 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"43d6700e-9065-436f-aa67-68590f89f57d","Type":"ContainerStarted","Data":"ba8d60af41e6af675a0308a34266764e98a2077aca58e3cf1865dcf2cafb03fc"} Dec 02 20:40:06 crc kubenswrapper[4796]: I1202 20:40:06.822577 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:07 crc kubenswrapper[4796]: I1202 20:40:07.299930 4796 scope.go:117] "RemoveContainer" containerID="a3b1e4c1e71e5db50d52130283a4f488c4d2f8d9bfa463ec7357e0975e3daac1" Dec 02 20:40:07 crc kubenswrapper[4796]: E1202 20:40:07.300290 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wzhpq_openshift-machine-config-operator(5558dc7c-93f9-4212-bf22-fdec743e47ee)\"" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" Dec 02 20:40:07 crc kubenswrapper[4796]: I1202 20:40:07.343815 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_6fb03211-0ce8-4dde-96ec-6692589cd895/watcher-decision-engine/0.log" Dec 02 20:40:07 crc kubenswrapper[4796]: I1202 20:40:07.377578 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"43d6700e-9065-436f-aa67-68590f89f57d","Type":"ContainerStarted","Data":"68eadbae36740b0287f189b80c3081837a7560316c787c83a00455c2c608d386"} Dec 02 20:40:07 crc kubenswrapper[4796]: I1202 20:40:07.409386 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-scheduler-0" podStartSLOduration=3.409362548 podStartE2EDuration="3.409362548s" podCreationTimestamp="2025-12-02 20:40:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:40:07.402109033 +0000 UTC m=+1690.405484577" watchObservedRunningTime="2025-12-02 20:40:07.409362548 +0000 UTC m=+1690.412738082" Dec 02 20:40:07 crc kubenswrapper[4796]: I1202 20:40:07.454991 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/cinder-api-0" Dec 02 
20:40:08 crc kubenswrapper[4796]: I1202 20:40:08.586898 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_6fb03211-0ce8-4dde-96ec-6692589cd895/watcher-decision-engine/0.log" Dec 02 20:40:09 crc kubenswrapper[4796]: I1202 20:40:09.837627 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_6fb03211-0ce8-4dde-96ec-6692589cd895/watcher-decision-engine/0.log" Dec 02 20:40:10 crc kubenswrapper[4796]: I1202 20:40:10.022335 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:40:11 crc kubenswrapper[4796]: I1202 20:40:11.085772 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_6fb03211-0ce8-4dde-96ec-6692589cd895/watcher-decision-engine/0.log" Dec 02 20:40:11 crc kubenswrapper[4796]: I1202 20:40:11.832129 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:11 crc kubenswrapper[4796]: I1202 20:40:11.883174 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:12 crc kubenswrapper[4796]: I1202 20:40:12.110162 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:12 crc kubenswrapper[4796]: I1202 20:40:12.331466 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_6fb03211-0ce8-4dde-96ec-6692589cd895/watcher-decision-engine/0.log" Dec 02 20:40:12 crc kubenswrapper[4796]: I1202 20:40:12.426876 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:12 crc kubenswrapper[4796]: I1202 20:40:12.454177 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:13 crc kubenswrapper[4796]: I1202 20:40:13.514866 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_6fb03211-0ce8-4dde-96ec-6692589cd895/watcher-decision-engine/0.log" Dec 02 20:40:13 crc kubenswrapper[4796]: I1202 20:40:13.765975 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_6fb03211-0ce8-4dde-96ec-6692589cd895/watcher-decision-engine/0.log" Dec 02 20:40:13 crc kubenswrapper[4796]: I1202 20:40:13.896978 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-db-sync-pdjpd"] Dec 02 20:40:13 crc kubenswrapper[4796]: I1202 20:40:13.903952 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-db-sync-pdjpd"] Dec 02 20:40:13 crc kubenswrapper[4796]: I1202 20:40:13.935721 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Dec 02 20:40:13 crc kubenswrapper[4796]: I1202 20:40:13.935996 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-backup-0" podUID="6c355ccd-843e-46c4-93d0-bd00e17867e8" containerName="cinder-backup" containerID="cri-o://bc73b5768b9ffc6e83cd0f0196b00c4b64c779767c336913681cc7cc0a7b7999" gracePeriod=30 Dec 02 20:40:13 crc kubenswrapper[4796]: I1202 
20:40:13.936156 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-backup-0" podUID="6c355ccd-843e-46c4-93d0-bd00e17867e8" containerName="probe" containerID="cri-o://94b9a4b4318620134b7e7c35d555717376508a7a4bffd37c1beeaa5d6af401b0" gracePeriod=30 Dec 02 20:40:13 crc kubenswrapper[4796]: I1202 20:40:13.981957 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Dec 02 20:40:13 crc kubenswrapper[4796]: I1202 20:40:13.982271 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-scheduler-0" podUID="43d6700e-9065-436f-aa67-68590f89f57d" containerName="cinder-scheduler" containerID="cri-o://a258540b5af5fee92f2bf3511f9559eb722b0777a8562a8a73987c2845e6126a" gracePeriod=30 Dec 02 20:40:13 crc kubenswrapper[4796]: I1202 20:40:13.982386 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-scheduler-0" podUID="43d6700e-9065-436f-aa67-68590f89f57d" containerName="probe" containerID="cri-o://68eadbae36740b0287f189b80c3081837a7560316c787c83a00455c2c608d386" gracePeriod=30 Dec 02 20:40:14 crc kubenswrapper[4796]: I1202 20:40:14.000311 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Dec 02 20:40:14 crc kubenswrapper[4796]: I1202 20:40:14.000608 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-api-0" podUID="93ee5017-b7fa-4c09-add3-76d393bb0e2b" containerName="cinder-api-log" containerID="cri-o://10cdff450040cf2e920bff80607829f00c868d8cbff434179b98562a59be0731" gracePeriod=30 Dec 02 20:40:14 crc kubenswrapper[4796]: I1202 20:40:14.000729 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-api-0" podUID="93ee5017-b7fa-4c09-add3-76d393bb0e2b" containerName="cinder-api" containerID="cri-o://fd41b87285f93d26194d0bfa1a42aacc254df4deb468734218400b3c53fbf0f3" gracePeriod=30 Dec 02 20:40:14 crc kubenswrapper[4796]: I1202 20:40:14.010199 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder4f01-account-delete-rd9rw"] Dec 02 20:40:14 crc kubenswrapper[4796]: I1202 20:40:14.011233 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder4f01-account-delete-rd9rw" Dec 02 20:40:14 crc kubenswrapper[4796]: I1202 20:40:14.020302 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder4f01-account-delete-rd9rw"] Dec 02 20:40:14 crc kubenswrapper[4796]: I1202 20:40:14.037884 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8108c599-5ca6-42d3-b1f5-1b86a40b4560-operator-scripts\") pod \"cinder4f01-account-delete-rd9rw\" (UID: \"8108c599-5ca6-42d3-b1f5-1b86a40b4560\") " pod="watcher-kuttl-default/cinder4f01-account-delete-rd9rw" Dec 02 20:40:14 crc kubenswrapper[4796]: I1202 20:40:14.037960 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg55w\" (UniqueName: \"kubernetes.io/projected/8108c599-5ca6-42d3-b1f5-1b86a40b4560-kube-api-access-pg55w\") pod \"cinder4f01-account-delete-rd9rw\" (UID: \"8108c599-5ca6-42d3-b1f5-1b86a40b4560\") " pod="watcher-kuttl-default/cinder4f01-account-delete-rd9rw" Dec 02 20:40:14 crc kubenswrapper[4796]: I1202 20:40:14.139475 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8108c599-5ca6-42d3-b1f5-1b86a40b4560-operator-scripts\") pod \"cinder4f01-account-delete-rd9rw\" (UID: \"8108c599-5ca6-42d3-b1f5-1b86a40b4560\") " pod="watcher-kuttl-default/cinder4f01-account-delete-rd9rw" Dec 02 20:40:14 crc kubenswrapper[4796]: I1202 20:40:14.140058 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pg55w\" (UniqueName: \"kubernetes.io/projected/8108c599-5ca6-42d3-b1f5-1b86a40b4560-kube-api-access-pg55w\") pod \"cinder4f01-account-delete-rd9rw\" (UID: \"8108c599-5ca6-42d3-b1f5-1b86a40b4560\") " pod="watcher-kuttl-default/cinder4f01-account-delete-rd9rw" Dec 02 20:40:14 crc kubenswrapper[4796]: I1202 20:40:14.141239 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8108c599-5ca6-42d3-b1f5-1b86a40b4560-operator-scripts\") pod \"cinder4f01-account-delete-rd9rw\" (UID: \"8108c599-5ca6-42d3-b1f5-1b86a40b4560\") " pod="watcher-kuttl-default/cinder4f01-account-delete-rd9rw" Dec 02 20:40:14 crc kubenswrapper[4796]: I1202 20:40:14.167172 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pg55w\" (UniqueName: \"kubernetes.io/projected/8108c599-5ca6-42d3-b1f5-1b86a40b4560-kube-api-access-pg55w\") pod \"cinder4f01-account-delete-rd9rw\" (UID: \"8108c599-5ca6-42d3-b1f5-1b86a40b4560\") " pod="watcher-kuttl-default/cinder4f01-account-delete-rd9rw" Dec 02 20:40:14 crc kubenswrapper[4796]: I1202 20:40:14.338189 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder4f01-account-delete-rd9rw" Dec 02 20:40:14 crc kubenswrapper[4796]: I1202 20:40:14.458546 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"93ee5017-b7fa-4c09-add3-76d393bb0e2b","Type":"ContainerDied","Data":"10cdff450040cf2e920bff80607829f00c868d8cbff434179b98562a59be0731"} Dec 02 20:40:14 crc kubenswrapper[4796]: I1202 20:40:14.458502 4796 generic.go:334] "Generic (PLEG): container finished" podID="93ee5017-b7fa-4c09-add3-76d393bb0e2b" containerID="10cdff450040cf2e920bff80607829f00c868d8cbff434179b98562a59be0731" exitCode=143 Dec 02 20:40:14 crc kubenswrapper[4796]: I1202 20:40:14.878058 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder4f01-account-delete-rd9rw"] Dec 02 20:40:14 crc kubenswrapper[4796]: I1202 20:40:14.921078 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_6fb03211-0ce8-4dde-96ec-6692589cd895/watcher-decision-engine/0.log" Dec 02 20:40:15 crc kubenswrapper[4796]: I1202 20:40:15.274415 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="febb424a-fe1a-487d-9c0b-2a7450991431" path="/var/lib/kubelet/pods/febb424a-fe1a-487d-9c0b-2a7450991431/volumes" Dec 02 20:40:15 crc kubenswrapper[4796]: I1202 20:40:15.472167 4796 generic.go:334] "Generic (PLEG): container finished" podID="43d6700e-9065-436f-aa67-68590f89f57d" containerID="68eadbae36740b0287f189b80c3081837a7560316c787c83a00455c2c608d386" exitCode=0 Dec 02 20:40:15 crc kubenswrapper[4796]: I1202 20:40:15.472245 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"43d6700e-9065-436f-aa67-68590f89f57d","Type":"ContainerDied","Data":"68eadbae36740b0287f189b80c3081837a7560316c787c83a00455c2c608d386"} Dec 02 20:40:15 crc kubenswrapper[4796]: I1202 20:40:15.475417 4796 generic.go:334] "Generic (PLEG): container finished" podID="6c355ccd-843e-46c4-93d0-bd00e17867e8" containerID="94b9a4b4318620134b7e7c35d555717376508a7a4bffd37c1beeaa5d6af401b0" exitCode=0 Dec 02 20:40:15 crc kubenswrapper[4796]: I1202 20:40:15.475487 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"6c355ccd-843e-46c4-93d0-bd00e17867e8","Type":"ContainerDied","Data":"94b9a4b4318620134b7e7c35d555717376508a7a4bffd37c1beeaa5d6af401b0"} Dec 02 20:40:15 crc kubenswrapper[4796]: I1202 20:40:15.477991 4796 generic.go:334] "Generic (PLEG): container finished" podID="8108c599-5ca6-42d3-b1f5-1b86a40b4560" containerID="a08852c95b74cda80ada9b1f264df34eab70c672cb51e3d48cee59f9e08fba17" exitCode=0 Dec 02 20:40:15 crc kubenswrapper[4796]: I1202 20:40:15.478068 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder4f01-account-delete-rd9rw" event={"ID":"8108c599-5ca6-42d3-b1f5-1b86a40b4560","Type":"ContainerDied","Data":"a08852c95b74cda80ada9b1f264df34eab70c672cb51e3d48cee59f9e08fba17"} Dec 02 20:40:15 crc kubenswrapper[4796]: I1202 20:40:15.478126 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder4f01-account-delete-rd9rw" event={"ID":"8108c599-5ca6-42d3-b1f5-1b86a40b4560","Type":"ContainerStarted","Data":"1228e3daa3857702607933c1d43ca0d4f2918dcdd78f54b903bf0ac5f1189668"} Dec 02 20:40:15 crc kubenswrapper[4796]: I1202 20:40:15.683992 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:40:15 crc kubenswrapper[4796]: I1202 20:40:15.684502 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="6fb03211-0ce8-4dde-96ec-6692589cd895" containerName="watcher-decision-engine" containerID="cri-o://f73d9e9a00b60d8a61dd714a5e0b8347b03577df9dba425911db7b5e0b3b0659" gracePeriod=30 Dec 02 20:40:16 crc kubenswrapper[4796]: I1202 20:40:16.135437 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_6fb03211-0ce8-4dde-96ec-6692589cd895/watcher-decision-engine/0.log" Dec 02 20:40:16 crc kubenswrapper[4796]: I1202 20:40:16.578246 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:40:16 crc kubenswrapper[4796]: I1202 20:40:16.578556 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="b630568c-0328-4374-9115-8ba4633b36d9" containerName="ceilometer-central-agent" containerID="cri-o://ce50f47769ee4617aeb5d5476925c4d69eccf9c166f910180d59ee35852f47bf" gracePeriod=30 Dec 02 20:40:16 crc kubenswrapper[4796]: I1202 20:40:16.578613 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="b630568c-0328-4374-9115-8ba4633b36d9" containerName="sg-core" containerID="cri-o://ce5daf3697504ebdf29a52e70e51dd8ea5e18f0825c6700427dd2a3751e1d621" gracePeriod=30 Dec 02 20:40:16 crc kubenswrapper[4796]: I1202 20:40:16.578661 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="b630568c-0328-4374-9115-8ba4633b36d9" containerName="ceilometer-notification-agent" containerID="cri-o://f885e657b96e1bb5e03368bbd881c61f8a55926395c495a4cecd813bcb41028a" gracePeriod=30 Dec 02 20:40:16 crc kubenswrapper[4796]: I1202 20:40:16.578669 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="b630568c-0328-4374-9115-8ba4633b36d9" containerName="proxy-httpd" containerID="cri-o://507699ff02d6cab9df8266a4c9169940352a204f4bb96cad363a7633c96e6961" gracePeriod=30 Dec 02 20:40:16 crc kubenswrapper[4796]: I1202 20:40:16.593306 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:16 crc kubenswrapper[4796]: I1202 20:40:16.904015 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder4f01-account-delete-rd9rw" Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.007412 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pg55w\" (UniqueName: \"kubernetes.io/projected/8108c599-5ca6-42d3-b1f5-1b86a40b4560-kube-api-access-pg55w\") pod \"8108c599-5ca6-42d3-b1f5-1b86a40b4560\" (UID: \"8108c599-5ca6-42d3-b1f5-1b86a40b4560\") " Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.007528 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8108c599-5ca6-42d3-b1f5-1b86a40b4560-operator-scripts\") pod \"8108c599-5ca6-42d3-b1f5-1b86a40b4560\" (UID: \"8108c599-5ca6-42d3-b1f5-1b86a40b4560\") " Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.008385 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8108c599-5ca6-42d3-b1f5-1b86a40b4560-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8108c599-5ca6-42d3-b1f5-1b86a40b4560" (UID: "8108c599-5ca6-42d3-b1f5-1b86a40b4560"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.014421 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8108c599-5ca6-42d3-b1f5-1b86a40b4560-kube-api-access-pg55w" (OuterVolumeSpecName: "kube-api-access-pg55w") pod "8108c599-5ca6-42d3-b1f5-1b86a40b4560" (UID: "8108c599-5ca6-42d3-b1f5-1b86a40b4560"). InnerVolumeSpecName "kube-api-access-pg55w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.109246 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pg55w\" (UniqueName: \"kubernetes.io/projected/8108c599-5ca6-42d3-b1f5-1b86a40b4560-kube-api-access-pg55w\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.109300 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8108c599-5ca6-42d3-b1f5-1b86a40b4560-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.196937 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/cinder-api-0" podUID="93ee5017-b7fa-4c09-add3-76d393bb0e2b" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.199:8776/healthcheck\": read tcp 10.217.0.2:46600->10.217.0.199:8776: read: connection reset by peer" Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.305944 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_6fb03211-0ce8-4dde-96ec-6692589cd895/watcher-decision-engine/0.log" Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.510028 4796 generic.go:334] "Generic (PLEG): container finished" podID="b630568c-0328-4374-9115-8ba4633b36d9" containerID="507699ff02d6cab9df8266a4c9169940352a204f4bb96cad363a7633c96e6961" exitCode=0 Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.510498 4796 generic.go:334] "Generic (PLEG): container finished" podID="b630568c-0328-4374-9115-8ba4633b36d9" containerID="ce5daf3697504ebdf29a52e70e51dd8ea5e18f0825c6700427dd2a3751e1d621" exitCode=2 Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.510513 4796 generic.go:334] "Generic (PLEG): container finished" 
podID="b630568c-0328-4374-9115-8ba4633b36d9" containerID="ce50f47769ee4617aeb5d5476925c4d69eccf9c166f910180d59ee35852f47bf" exitCode=0 Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.510587 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b630568c-0328-4374-9115-8ba4633b36d9","Type":"ContainerDied","Data":"507699ff02d6cab9df8266a4c9169940352a204f4bb96cad363a7633c96e6961"} Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.510621 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b630568c-0328-4374-9115-8ba4633b36d9","Type":"ContainerDied","Data":"ce5daf3697504ebdf29a52e70e51dd8ea5e18f0825c6700427dd2a3751e1d621"} Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.510659 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b630568c-0328-4374-9115-8ba4633b36d9","Type":"ContainerDied","Data":"ce50f47769ee4617aeb5d5476925c4d69eccf9c166f910180d59ee35852f47bf"} Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.513583 4796 generic.go:334] "Generic (PLEG): container finished" podID="93ee5017-b7fa-4c09-add3-76d393bb0e2b" containerID="fd41b87285f93d26194d0bfa1a42aacc254df4deb468734218400b3c53fbf0f3" exitCode=0 Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.513648 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"93ee5017-b7fa-4c09-add3-76d393bb0e2b","Type":"ContainerDied","Data":"fd41b87285f93d26194d0bfa1a42aacc254df4deb468734218400b3c53fbf0f3"} Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.530534 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder4f01-account-delete-rd9rw" event={"ID":"8108c599-5ca6-42d3-b1f5-1b86a40b4560","Type":"ContainerDied","Data":"1228e3daa3857702607933c1d43ca0d4f2918dcdd78f54b903bf0ac5f1189668"} Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.530591 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1228e3daa3857702607933c1d43ca0d4f2918dcdd78f54b903bf0ac5f1189668" Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.530664 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder4f01-account-delete-rd9rw" Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.532310 4796 generic.go:334] "Generic (PLEG): container finished" podID="43d6700e-9065-436f-aa67-68590f89f57d" containerID="a258540b5af5fee92f2bf3511f9559eb722b0777a8562a8a73987c2845e6126a" exitCode=0 Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.532333 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"43d6700e-9065-436f-aa67-68590f89f57d","Type":"ContainerDied","Data":"a258540b5af5fee92f2bf3511f9559eb722b0777a8562a8a73987c2845e6126a"} Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.640768 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.689886 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.720330 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mdm5\" (UniqueName: \"kubernetes.io/projected/93ee5017-b7fa-4c09-add3-76d393bb0e2b-kube-api-access-2mdm5\") pod \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\" (UID: \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\") " Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.720392 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/93ee5017-b7fa-4c09-add3-76d393bb0e2b-public-tls-certs\") pod \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\" (UID: \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\") " Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.720428 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/93ee5017-b7fa-4c09-add3-76d393bb0e2b-cert-memcached-mtls\") pod \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\" (UID: \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\") " Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.720463 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/43d6700e-9065-436f-aa67-68590f89f57d-cert-memcached-mtls\") pod \"43d6700e-9065-436f-aa67-68590f89f57d\" (UID: \"43d6700e-9065-436f-aa67-68590f89f57d\") " Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.720482 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/43d6700e-9065-436f-aa67-68590f89f57d-etc-machine-id\") pod \"43d6700e-9065-436f-aa67-68590f89f57d\" (UID: \"43d6700e-9065-436f-aa67-68590f89f57d\") " Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.720617 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93ee5017-b7fa-4c09-add3-76d393bb0e2b-scripts\") pod \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\" (UID: \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\") " Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.720652 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/93ee5017-b7fa-4c09-add3-76d393bb0e2b-etc-machine-id\") pod \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\" (UID: \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\") " Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.720674 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93ee5017-b7fa-4c09-add3-76d393bb0e2b-combined-ca-bundle\") pod \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\" (UID: \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\") " Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.720715 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93ee5017-b7fa-4c09-add3-76d393bb0e2b-logs\") pod \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\" (UID: \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\") " Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.720748 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43d6700e-9065-436f-aa67-68590f89f57d-combined-ca-bundle\") pod 
\"43d6700e-9065-436f-aa67-68590f89f57d\" (UID: \"43d6700e-9065-436f-aa67-68590f89f57d\") " Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.720782 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93ee5017-b7fa-4c09-add3-76d393bb0e2b-internal-tls-certs\") pod \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\" (UID: \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\") " Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.720810 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/93ee5017-b7fa-4c09-add3-76d393bb0e2b-config-data-custom\") pod \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\" (UID: \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\") " Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.720848 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43d6700e-9065-436f-aa67-68590f89f57d-config-data\") pod \"43d6700e-9065-436f-aa67-68590f89f57d\" (UID: \"43d6700e-9065-436f-aa67-68590f89f57d\") " Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.720889 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43d6700e-9065-436f-aa67-68590f89f57d-scripts\") pod \"43d6700e-9065-436f-aa67-68590f89f57d\" (UID: \"43d6700e-9065-436f-aa67-68590f89f57d\") " Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.720922 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsght\" (UniqueName: \"kubernetes.io/projected/43d6700e-9065-436f-aa67-68590f89f57d-kube-api-access-hsght\") pod \"43d6700e-9065-436f-aa67-68590f89f57d\" (UID: \"43d6700e-9065-436f-aa67-68590f89f57d\") " Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.720948 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43d6700e-9065-436f-aa67-68590f89f57d-config-data-custom\") pod \"43d6700e-9065-436f-aa67-68590f89f57d\" (UID: \"43d6700e-9065-436f-aa67-68590f89f57d\") " Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.720992 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93ee5017-b7fa-4c09-add3-76d393bb0e2b-config-data\") pod \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\" (UID: \"93ee5017-b7fa-4c09-add3-76d393bb0e2b\") " Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.724884 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43d6700e-9065-436f-aa67-68590f89f57d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "43d6700e-9065-436f-aa67-68590f89f57d" (UID: "43d6700e-9065-436f-aa67-68590f89f57d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.728632 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93ee5017-b7fa-4c09-add3-76d393bb0e2b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "93ee5017-b7fa-4c09-add3-76d393bb0e2b" (UID: "93ee5017-b7fa-4c09-add3-76d393bb0e2b"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.731829 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93ee5017-b7fa-4c09-add3-76d393bb0e2b-logs" (OuterVolumeSpecName: "logs") pod "93ee5017-b7fa-4c09-add3-76d393bb0e2b" (UID: "93ee5017-b7fa-4c09-add3-76d393bb0e2b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.731951 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93ee5017-b7fa-4c09-add3-76d393bb0e2b-kube-api-access-2mdm5" (OuterVolumeSpecName: "kube-api-access-2mdm5") pod "93ee5017-b7fa-4c09-add3-76d393bb0e2b" (UID: "93ee5017-b7fa-4c09-add3-76d393bb0e2b"). InnerVolumeSpecName "kube-api-access-2mdm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.734417 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93ee5017-b7fa-4c09-add3-76d393bb0e2b-scripts" (OuterVolumeSpecName: "scripts") pod "93ee5017-b7fa-4c09-add3-76d393bb0e2b" (UID: "93ee5017-b7fa-4c09-add3-76d393bb0e2b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.755169 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43d6700e-9065-436f-aa67-68590f89f57d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "43d6700e-9065-436f-aa67-68590f89f57d" (UID: "43d6700e-9065-436f-aa67-68590f89f57d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.755547 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43d6700e-9065-436f-aa67-68590f89f57d-scripts" (OuterVolumeSpecName: "scripts") pod "43d6700e-9065-436f-aa67-68590f89f57d" (UID: "43d6700e-9065-436f-aa67-68590f89f57d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.755785 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93ee5017-b7fa-4c09-add3-76d393bb0e2b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "93ee5017-b7fa-4c09-add3-76d393bb0e2b" (UID: "93ee5017-b7fa-4c09-add3-76d393bb0e2b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.755906 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43d6700e-9065-436f-aa67-68590f89f57d-kube-api-access-hsght" (OuterVolumeSpecName: "kube-api-access-hsght") pod "43d6700e-9065-436f-aa67-68590f89f57d" (UID: "43d6700e-9065-436f-aa67-68590f89f57d"). InnerVolumeSpecName "kube-api-access-hsght". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.786400 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93ee5017-b7fa-4c09-add3-76d393bb0e2b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93ee5017-b7fa-4c09-add3-76d393bb0e2b" (UID: "93ee5017-b7fa-4c09-add3-76d393bb0e2b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.802672 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93ee5017-b7fa-4c09-add3-76d393bb0e2b-config-data" (OuterVolumeSpecName: "config-data") pod "93ee5017-b7fa-4c09-add3-76d393bb0e2b" (UID: "93ee5017-b7fa-4c09-add3-76d393bb0e2b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.810208 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93ee5017-b7fa-4c09-add3-76d393bb0e2b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "93ee5017-b7fa-4c09-add3-76d393bb0e2b" (UID: "93ee5017-b7fa-4c09-add3-76d393bb0e2b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.821790 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43d6700e-9065-436f-aa67-68590f89f57d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43d6700e-9065-436f-aa67-68590f89f57d" (UID: "43d6700e-9065-436f-aa67-68590f89f57d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.824571 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93ee5017-b7fa-4c09-add3-76d393bb0e2b-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.824608 4796 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/93ee5017-b7fa-4c09-add3-76d393bb0e2b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.824626 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93ee5017-b7fa-4c09-add3-76d393bb0e2b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.824637 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93ee5017-b7fa-4c09-add3-76d393bb0e2b-logs\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.824647 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43d6700e-9065-436f-aa67-68590f89f57d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.824657 4796 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/93ee5017-b7fa-4c09-add3-76d393bb0e2b-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.824666 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43d6700e-9065-436f-aa67-68590f89f57d-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.824676 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsght\" (UniqueName: \"kubernetes.io/projected/43d6700e-9065-436f-aa67-68590f89f57d-kube-api-access-hsght\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.824686 
4796 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43d6700e-9065-436f-aa67-68590f89f57d-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.824695 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93ee5017-b7fa-4c09-add3-76d393bb0e2b-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.824707 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mdm5\" (UniqueName: \"kubernetes.io/projected/93ee5017-b7fa-4c09-add3-76d393bb0e2b-kube-api-access-2mdm5\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.824717 4796 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/93ee5017-b7fa-4c09-add3-76d393bb0e2b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.824725 4796 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/43d6700e-9065-436f-aa67-68590f89f57d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.845352 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93ee5017-b7fa-4c09-add3-76d393bb0e2b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "93ee5017-b7fa-4c09-add3-76d393bb0e2b" (UID: "93ee5017-b7fa-4c09-add3-76d393bb0e2b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.847900 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93ee5017-b7fa-4c09-add3-76d393bb0e2b-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "93ee5017-b7fa-4c09-add3-76d393bb0e2b" (UID: "93ee5017-b7fa-4c09-add3-76d393bb0e2b"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.875562 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43d6700e-9065-436f-aa67-68590f89f57d-config-data" (OuterVolumeSpecName: "config-data") pod "43d6700e-9065-436f-aa67-68590f89f57d" (UID: "43d6700e-9065-436f-aa67-68590f89f57d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.927219 4796 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/93ee5017-b7fa-4c09-add3-76d393bb0e2b-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.927280 4796 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93ee5017-b7fa-4c09-add3-76d393bb0e2b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.927291 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43d6700e-9065-436f-aa67-68590f89f57d-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:17 crc kubenswrapper[4796]: I1202 20:40:17.955874 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43d6700e-9065-436f-aa67-68590f89f57d-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "43d6700e-9065-436f-aa67-68590f89f57d" (UID: "43d6700e-9065-436f-aa67-68590f89f57d"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.028408 4796 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/43d6700e-9065-436f-aa67-68590f89f57d-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.034091 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.129923 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-run\") pod \"6c355ccd-843e-46c4-93d0-bd00e17867e8\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.129975 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-etc-iscsi\") pod \"6c355ccd-843e-46c4-93d0-bd00e17867e8\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.130019 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c355ccd-843e-46c4-93d0-bd00e17867e8-config-data-custom\") pod \"6c355ccd-843e-46c4-93d0-bd00e17867e8\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.130052 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-run" (OuterVolumeSpecName: "run") pod "6c355ccd-843e-46c4-93d0-bd00e17867e8" (UID: "6c355ccd-843e-46c4-93d0-bd00e17867e8"). InnerVolumeSpecName "run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.130071 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-sys\") pod \"6c355ccd-843e-46c4-93d0-bd00e17867e8\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.130125 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-sys" (OuterVolumeSpecName: "sys") pod "6c355ccd-843e-46c4-93d0-bd00e17867e8" (UID: "6c355ccd-843e-46c4-93d0-bd00e17867e8"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.130129 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "6c355ccd-843e-46c4-93d0-bd00e17867e8" (UID: "6c355ccd-843e-46c4-93d0-bd00e17867e8"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.130163 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-etc-nvme\") pod \"6c355ccd-843e-46c4-93d0-bd00e17867e8\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.130200 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c355ccd-843e-46c4-93d0-bd00e17867e8-config-data\") pod \"6c355ccd-843e-46c4-93d0-bd00e17867e8\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.130236 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "6c355ccd-843e-46c4-93d0-bd00e17867e8" (UID: "6c355ccd-843e-46c4-93d0-bd00e17867e8"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.130307 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mddf5\" (UniqueName: \"kubernetes.io/projected/6c355ccd-843e-46c4-93d0-bd00e17867e8-kube-api-access-mddf5\") pod \"6c355ccd-843e-46c4-93d0-bd00e17867e8\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.130382 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-lib-modules\") pod \"6c355ccd-843e-46c4-93d0-bd00e17867e8\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.130426 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-dev\") pod \"6c355ccd-843e-46c4-93d0-bd00e17867e8\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.130457 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-var-locks-brick\") pod \"6c355ccd-843e-46c4-93d0-bd00e17867e8\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.130492 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6c355ccd-843e-46c4-93d0-bd00e17867e8" (UID: "6c355ccd-843e-46c4-93d0-bd00e17867e8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.130514 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-dev" (OuterVolumeSpecName: "dev") pod "6c355ccd-843e-46c4-93d0-bd00e17867e8" (UID: "6c355ccd-843e-46c4-93d0-bd00e17867e8"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.130505 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6c355ccd-843e-46c4-93d0-bd00e17867e8-cert-memcached-mtls\") pod \"6c355ccd-843e-46c4-93d0-bd00e17867e8\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.130555 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "6c355ccd-843e-46c4-93d0-bd00e17867e8" (UID: "6c355ccd-843e-46c4-93d0-bd00e17867e8"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.130587 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c355ccd-843e-46c4-93d0-bd00e17867e8-scripts\") pod \"6c355ccd-843e-46c4-93d0-bd00e17867e8\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.130632 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-etc-machine-id\") pod \"6c355ccd-843e-46c4-93d0-bd00e17867e8\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.130665 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c355ccd-843e-46c4-93d0-bd00e17867e8-combined-ca-bundle\") pod \"6c355ccd-843e-46c4-93d0-bd00e17867e8\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.130701 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-var-lib-cinder\") pod \"6c355ccd-843e-46c4-93d0-bd00e17867e8\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.130774 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-var-locks-cinder\") pod \"6c355ccd-843e-46c4-93d0-bd00e17867e8\" (UID: \"6c355ccd-843e-46c4-93d0-bd00e17867e8\") " Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.130869 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "6c355ccd-843e-46c4-93d0-bd00e17867e8" (UID: "6c355ccd-843e-46c4-93d0-bd00e17867e8"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.130919 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6c355ccd-843e-46c4-93d0-bd00e17867e8" (UID: "6c355ccd-843e-46c4-93d0-bd00e17867e8"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.131009 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "6c355ccd-843e-46c4-93d0-bd00e17867e8" (UID: "6c355ccd-843e-46c4-93d0-bd00e17867e8"). InnerVolumeSpecName "var-locks-cinder". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.131386 4796 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.131424 4796 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-var-lib-cinder\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.131442 4796 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-var-locks-cinder\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.131460 4796 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-run\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.131477 4796 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-etc-iscsi\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.131492 4796 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-sys\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.131506 4796 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-etc-nvme\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.131521 4796 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-lib-modules\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.131536 4796 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-dev\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.131552 4796 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6c355ccd-843e-46c4-93d0-bd00e17867e8-var-locks-brick\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.132905 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c355ccd-843e-46c4-93d0-bd00e17867e8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6c355ccd-843e-46c4-93d0-bd00e17867e8" (UID: "6c355ccd-843e-46c4-93d0-bd00e17867e8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.133495 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c355ccd-843e-46c4-93d0-bd00e17867e8-kube-api-access-mddf5" (OuterVolumeSpecName: "kube-api-access-mddf5") pod "6c355ccd-843e-46c4-93d0-bd00e17867e8" (UID: "6c355ccd-843e-46c4-93d0-bd00e17867e8"). InnerVolumeSpecName "kube-api-access-mddf5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.134273 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c355ccd-843e-46c4-93d0-bd00e17867e8-scripts" (OuterVolumeSpecName: "scripts") pod "6c355ccd-843e-46c4-93d0-bd00e17867e8" (UID: "6c355ccd-843e-46c4-93d0-bd00e17867e8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.178065 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c355ccd-843e-46c4-93d0-bd00e17867e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c355ccd-843e-46c4-93d0-bd00e17867e8" (UID: "6c355ccd-843e-46c4-93d0-bd00e17867e8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.230385 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c355ccd-843e-46c4-93d0-bd00e17867e8-config-data" (OuterVolumeSpecName: "config-data") pod "6c355ccd-843e-46c4-93d0-bd00e17867e8" (UID: "6c355ccd-843e-46c4-93d0-bd00e17867e8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.232669 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c355ccd-843e-46c4-93d0-bd00e17867e8-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.232706 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mddf5\" (UniqueName: \"kubernetes.io/projected/6c355ccd-843e-46c4-93d0-bd00e17867e8-kube-api-access-mddf5\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.232721 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c355ccd-843e-46c4-93d0-bd00e17867e8-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.232733 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c355ccd-843e-46c4-93d0-bd00e17867e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.232744 4796 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c355ccd-843e-46c4-93d0-bd00e17867e8-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.285713 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c355ccd-843e-46c4-93d0-bd00e17867e8-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "6c355ccd-843e-46c4-93d0-bd00e17867e8" (UID: "6c355ccd-843e-46c4-93d0-bd00e17867e8"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.334718 4796 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6c355ccd-843e-46c4-93d0-bd00e17867e8-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.489957 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_6fb03211-0ce8-4dde-96ec-6692589cd895/watcher-decision-engine/0.log" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.542984 4796 generic.go:334] "Generic (PLEG): container finished" podID="6c355ccd-843e-46c4-93d0-bd00e17867e8" containerID="bc73b5768b9ffc6e83cd0f0196b00c4b64c779767c336913681cc7cc0a7b7999" exitCode=0 Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.543029 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.543391 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"6c355ccd-843e-46c4-93d0-bd00e17867e8","Type":"ContainerDied","Data":"bc73b5768b9ffc6e83cd0f0196b00c4b64c779767c336913681cc7cc0a7b7999"} Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.543452 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"6c355ccd-843e-46c4-93d0-bd00e17867e8","Type":"ContainerDied","Data":"c72dea865b15f9f80debfdfdd1bc92485569ec01a86833b87950eb32da758b9e"} Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.543477 4796 scope.go:117] "RemoveContainer" containerID="94b9a4b4318620134b7e7c35d555717376508a7a4bffd37c1beeaa5d6af401b0" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.545455 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"93ee5017-b7fa-4c09-add3-76d393bb0e2b","Type":"ContainerDied","Data":"486082df7d79379e3fa805e1ff9a265376c8c6989f8eeb1b591874d890802820"} Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.545615 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.548632 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"43d6700e-9065-436f-aa67-68590f89f57d","Type":"ContainerDied","Data":"ba8d60af41e6af675a0308a34266764e98a2077aca58e3cf1865dcf2cafb03fc"} Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.548717 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.565972 4796 scope.go:117] "RemoveContainer" containerID="bc73b5768b9ffc6e83cd0f0196b00c4b64c779767c336913681cc7cc0a7b7999" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.594146 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.603224 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.609954 4796 scope.go:117] "RemoveContainer" containerID="94b9a4b4318620134b7e7c35d555717376508a7a4bffd37c1beeaa5d6af401b0" Dec 02 20:40:18 crc kubenswrapper[4796]: E1202 20:40:18.610517 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94b9a4b4318620134b7e7c35d555717376508a7a4bffd37c1beeaa5d6af401b0\": container with ID starting with 94b9a4b4318620134b7e7c35d555717376508a7a4bffd37c1beeaa5d6af401b0 not found: ID does not exist" containerID="94b9a4b4318620134b7e7c35d555717376508a7a4bffd37c1beeaa5d6af401b0" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.610558 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94b9a4b4318620134b7e7c35d555717376508a7a4bffd37c1beeaa5d6af401b0"} err="failed to get container status \"94b9a4b4318620134b7e7c35d555717376508a7a4bffd37c1beeaa5d6af401b0\": rpc error: code = NotFound desc = could not find container \"94b9a4b4318620134b7e7c35d555717376508a7a4bffd37c1beeaa5d6af401b0\": container with ID starting with 94b9a4b4318620134b7e7c35d555717376508a7a4bffd37c1beeaa5d6af401b0 not found: ID does not exist" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.610583 4796 scope.go:117] "RemoveContainer" containerID="bc73b5768b9ffc6e83cd0f0196b00c4b64c779767c336913681cc7cc0a7b7999" Dec 02 20:40:18 crc kubenswrapper[4796]: E1202 20:40:18.610865 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc73b5768b9ffc6e83cd0f0196b00c4b64c779767c336913681cc7cc0a7b7999\": container with ID starting with bc73b5768b9ffc6e83cd0f0196b00c4b64c779767c336913681cc7cc0a7b7999 not found: ID does not exist" containerID="bc73b5768b9ffc6e83cd0f0196b00c4b64c779767c336913681cc7cc0a7b7999" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.610998 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc73b5768b9ffc6e83cd0f0196b00c4b64c779767c336913681cc7cc0a7b7999"} err="failed to get container status \"bc73b5768b9ffc6e83cd0f0196b00c4b64c779767c336913681cc7cc0a7b7999\": rpc error: code = NotFound desc = could not find container \"bc73b5768b9ffc6e83cd0f0196b00c4b64c779767c336913681cc7cc0a7b7999\": container with ID starting with bc73b5768b9ffc6e83cd0f0196b00c4b64c779767c336913681cc7cc0a7b7999 not found: ID does not exist" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.611099 4796 scope.go:117] "RemoveContainer" containerID="fd41b87285f93d26194d0bfa1a42aacc254df4deb468734218400b3c53fbf0f3" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.613972 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.622486 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["watcher-kuttl-default/cinder-scheduler-0"] Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.632094 4796 scope.go:117] "RemoveContainer" containerID="10cdff450040cf2e920bff80607829f00c868d8cbff434179b98562a59be0731" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.635121 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.642308 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.648693 4796 scope.go:117] "RemoveContainer" containerID="68eadbae36740b0287f189b80c3081837a7560316c787c83a00455c2c608d386" Dec 02 20:40:18 crc kubenswrapper[4796]: I1202 20:40:18.676670 4796 scope.go:117] "RemoveContainer" containerID="a258540b5af5fee92f2bf3511f9559eb722b0777a8562a8a73987c2845e6126a" Dec 02 20:40:19 crc kubenswrapper[4796]: I1202 20:40:19.030086 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-db-create-hd95z"] Dec 02 20:40:19 crc kubenswrapper[4796]: I1202 20:40:19.040557 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-db-create-hd95z"] Dec 02 20:40:19 crc kubenswrapper[4796]: I1202 20:40:19.048931 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-4f01-account-create-update-zwh9c"] Dec 02 20:40:19 crc kubenswrapper[4796]: I1202 20:40:19.055199 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder4f01-account-delete-rd9rw"] Dec 02 20:40:19 crc kubenswrapper[4796]: I1202 20:40:19.061367 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-4f01-account-create-update-zwh9c"] Dec 02 20:40:19 crc kubenswrapper[4796]: I1202 20:40:19.067320 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder4f01-account-delete-rd9rw"] Dec 02 20:40:19 crc kubenswrapper[4796]: I1202 20:40:19.275000 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43d6700e-9065-436f-aa67-68590f89f57d" path="/var/lib/kubelet/pods/43d6700e-9065-436f-aa67-68590f89f57d/volumes" Dec 02 20:40:19 crc kubenswrapper[4796]: I1202 20:40:19.275952 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="456f8361-9b09-40dd-943c-90191091aef2" path="/var/lib/kubelet/pods/456f8361-9b09-40dd-943c-90191091aef2/volumes" Dec 02 20:40:19 crc kubenswrapper[4796]: I1202 20:40:19.276767 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c355ccd-843e-46c4-93d0-bd00e17867e8" path="/var/lib/kubelet/pods/6c355ccd-843e-46c4-93d0-bd00e17867e8/volumes" Dec 02 20:40:19 crc kubenswrapper[4796]: I1202 20:40:19.278280 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8108c599-5ca6-42d3-b1f5-1b86a40b4560" path="/var/lib/kubelet/pods/8108c599-5ca6-42d3-b1f5-1b86a40b4560/volumes" Dec 02 20:40:19 crc kubenswrapper[4796]: I1202 20:40:19.279031 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93ee5017-b7fa-4c09-add3-76d393bb0e2b" path="/var/lib/kubelet/pods/93ee5017-b7fa-4c09-add3-76d393bb0e2b/volumes" Dec 02 20:40:19 crc kubenswrapper[4796]: I1202 20:40:19.279894 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c79c74cf-8574-42e6-954b-42455083b729" path="/var/lib/kubelet/pods/c79c74cf-8574-42e6-954b-42455083b729/volumes" Dec 02 20:40:19 crc kubenswrapper[4796]: I1202 20:40:19.692653 4796 
log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_6fb03211-0ce8-4dde-96ec-6692589cd895/watcher-decision-engine/0.log" Dec 02 20:40:20 crc kubenswrapper[4796]: I1202 20:40:20.574659 4796 generic.go:334] "Generic (PLEG): container finished" podID="6fb03211-0ce8-4dde-96ec-6692589cd895" containerID="f73d9e9a00b60d8a61dd714a5e0b8347b03577df9dba425911db7b5e0b3b0659" exitCode=0 Dec 02 20:40:20 crc kubenswrapper[4796]: I1202 20:40:20.574700 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"6fb03211-0ce8-4dde-96ec-6692589cd895","Type":"ContainerDied","Data":"f73d9e9a00b60d8a61dd714a5e0b8347b03577df9dba425911db7b5e0b3b0659"} Dec 02 20:40:20 crc kubenswrapper[4796]: I1202 20:40:20.574725 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"6fb03211-0ce8-4dde-96ec-6692589cd895","Type":"ContainerDied","Data":"991feba09951dc3221c3e54b8293530c07dd8d2386306151c398e20faa47d0ba"} Dec 02 20:40:20 crc kubenswrapper[4796]: I1202 20:40:20.574736 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="991feba09951dc3221c3e54b8293530c07dd8d2386306151c398e20faa47d0ba" Dec 02 20:40:20 crc kubenswrapper[4796]: I1202 20:40:20.608456 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:20 crc kubenswrapper[4796]: I1202 20:40:20.680054 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgk4c\" (UniqueName: \"kubernetes.io/projected/6fb03211-0ce8-4dde-96ec-6692589cd895-kube-api-access-jgk4c\") pod \"6fb03211-0ce8-4dde-96ec-6692589cd895\" (UID: \"6fb03211-0ce8-4dde-96ec-6692589cd895\") " Dec 02 20:40:20 crc kubenswrapper[4796]: I1202 20:40:20.681310 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fb03211-0ce8-4dde-96ec-6692589cd895-combined-ca-bundle\") pod \"6fb03211-0ce8-4dde-96ec-6692589cd895\" (UID: \"6fb03211-0ce8-4dde-96ec-6692589cd895\") " Dec 02 20:40:20 crc kubenswrapper[4796]: I1202 20:40:20.681379 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/6fb03211-0ce8-4dde-96ec-6692589cd895-custom-prometheus-ca\") pod \"6fb03211-0ce8-4dde-96ec-6692589cd895\" (UID: \"6fb03211-0ce8-4dde-96ec-6692589cd895\") " Dec 02 20:40:20 crc kubenswrapper[4796]: I1202 20:40:20.681406 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6fb03211-0ce8-4dde-96ec-6692589cd895-cert-memcached-mtls\") pod \"6fb03211-0ce8-4dde-96ec-6692589cd895\" (UID: \"6fb03211-0ce8-4dde-96ec-6692589cd895\") " Dec 02 20:40:20 crc kubenswrapper[4796]: I1202 20:40:20.681440 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fb03211-0ce8-4dde-96ec-6692589cd895-config-data\") pod \"6fb03211-0ce8-4dde-96ec-6692589cd895\" (UID: \"6fb03211-0ce8-4dde-96ec-6692589cd895\") " Dec 02 20:40:20 crc kubenswrapper[4796]: I1202 20:40:20.681528 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fb03211-0ce8-4dde-96ec-6692589cd895-logs\") pod 
\"6fb03211-0ce8-4dde-96ec-6692589cd895\" (UID: \"6fb03211-0ce8-4dde-96ec-6692589cd895\") " Dec 02 20:40:20 crc kubenswrapper[4796]: I1202 20:40:20.682468 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fb03211-0ce8-4dde-96ec-6692589cd895-logs" (OuterVolumeSpecName: "logs") pod "6fb03211-0ce8-4dde-96ec-6692589cd895" (UID: "6fb03211-0ce8-4dde-96ec-6692589cd895"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:40:20 crc kubenswrapper[4796]: I1202 20:40:20.695084 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fb03211-0ce8-4dde-96ec-6692589cd895-kube-api-access-jgk4c" (OuterVolumeSpecName: "kube-api-access-jgk4c") pod "6fb03211-0ce8-4dde-96ec-6692589cd895" (UID: "6fb03211-0ce8-4dde-96ec-6692589cd895"). InnerVolumeSpecName "kube-api-access-jgk4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:40:20 crc kubenswrapper[4796]: I1202 20:40:20.725343 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fb03211-0ce8-4dde-96ec-6692589cd895-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "6fb03211-0ce8-4dde-96ec-6692589cd895" (UID: "6fb03211-0ce8-4dde-96ec-6692589cd895"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:20 crc kubenswrapper[4796]: I1202 20:40:20.735439 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fb03211-0ce8-4dde-96ec-6692589cd895-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6fb03211-0ce8-4dde-96ec-6692589cd895" (UID: "6fb03211-0ce8-4dde-96ec-6692589cd895"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:20 crc kubenswrapper[4796]: I1202 20:40:20.772484 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fb03211-0ce8-4dde-96ec-6692589cd895-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "6fb03211-0ce8-4dde-96ec-6692589cd895" (UID: "6fb03211-0ce8-4dde-96ec-6692589cd895"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:20 crc kubenswrapper[4796]: I1202 20:40:20.779142 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fb03211-0ce8-4dde-96ec-6692589cd895-config-data" (OuterVolumeSpecName: "config-data") pod "6fb03211-0ce8-4dde-96ec-6692589cd895" (UID: "6fb03211-0ce8-4dde-96ec-6692589cd895"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:20 crc kubenswrapper[4796]: I1202 20:40:20.783004 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fb03211-0ce8-4dde-96ec-6692589cd895-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:20 crc kubenswrapper[4796]: I1202 20:40:20.783032 4796 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/6fb03211-0ce8-4dde-96ec-6692589cd895-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:20 crc kubenswrapper[4796]: I1202 20:40:20.783043 4796 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6fb03211-0ce8-4dde-96ec-6692589cd895-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:20 crc kubenswrapper[4796]: I1202 20:40:20.783053 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fb03211-0ce8-4dde-96ec-6692589cd895-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:20 crc kubenswrapper[4796]: I1202 20:40:20.783062 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fb03211-0ce8-4dde-96ec-6692589cd895-logs\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:20 crc kubenswrapper[4796]: I1202 20:40:20.783071 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgk4c\" (UniqueName: \"kubernetes.io/projected/6fb03211-0ce8-4dde-96ec-6692589cd895-kube-api-access-jgk4c\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:20 crc kubenswrapper[4796]: I1202 20:40:20.893471 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_6fb03211-0ce8-4dde-96ec-6692589cd895/watcher-decision-engine/0.log" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.590270 4796 generic.go:334] "Generic (PLEG): container finished" podID="b630568c-0328-4374-9115-8ba4633b36d9" containerID="f885e657b96e1bb5e03368bbd881c61f8a55926395c495a4cecd813bcb41028a" exitCode=0 Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.590580 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.590313 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b630568c-0328-4374-9115-8ba4633b36d9","Type":"ContainerDied","Data":"f885e657b96e1bb5e03368bbd881c61f8a55926395c495a4cecd813bcb41028a"} Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.623310 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.636271 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.654245 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:40:21 crc kubenswrapper[4796]: E1202 20:40:21.654588 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93ee5017-b7fa-4c09-add3-76d393bb0e2b" containerName="cinder-api" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.654609 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="93ee5017-b7fa-4c09-add3-76d393bb0e2b" containerName="cinder-api" Dec 02 20:40:21 crc kubenswrapper[4796]: E1202 20:40:21.654620 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93ee5017-b7fa-4c09-add3-76d393bb0e2b" containerName="cinder-api-log" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.654627 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="93ee5017-b7fa-4c09-add3-76d393bb0e2b" containerName="cinder-api-log" Dec 02 20:40:21 crc kubenswrapper[4796]: E1202 20:40:21.654646 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fb03211-0ce8-4dde-96ec-6692589cd895" containerName="watcher-decision-engine" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.654654 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fb03211-0ce8-4dde-96ec-6692589cd895" containerName="watcher-decision-engine" Dec 02 20:40:21 crc kubenswrapper[4796]: E1202 20:40:21.654668 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c355ccd-843e-46c4-93d0-bd00e17867e8" containerName="cinder-backup" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.654675 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c355ccd-843e-46c4-93d0-bd00e17867e8" containerName="cinder-backup" Dec 02 20:40:21 crc kubenswrapper[4796]: E1202 20:40:21.654690 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8108c599-5ca6-42d3-b1f5-1b86a40b4560" containerName="mariadb-account-delete" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.654708 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="8108c599-5ca6-42d3-b1f5-1b86a40b4560" containerName="mariadb-account-delete" Dec 02 20:40:21 crc kubenswrapper[4796]: E1202 20:40:21.654727 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c355ccd-843e-46c4-93d0-bd00e17867e8" containerName="probe" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.654732 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c355ccd-843e-46c4-93d0-bd00e17867e8" containerName="probe" Dec 02 20:40:21 crc kubenswrapper[4796]: E1202 20:40:21.654742 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43d6700e-9065-436f-aa67-68590f89f57d" containerName="probe" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 
20:40:21.654748 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="43d6700e-9065-436f-aa67-68590f89f57d" containerName="probe" Dec 02 20:40:21 crc kubenswrapper[4796]: E1202 20:40:21.654762 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43d6700e-9065-436f-aa67-68590f89f57d" containerName="cinder-scheduler" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.654767 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="43d6700e-9065-436f-aa67-68590f89f57d" containerName="cinder-scheduler" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.654911 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c355ccd-843e-46c4-93d0-bd00e17867e8" containerName="cinder-backup" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.654922 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="8108c599-5ca6-42d3-b1f5-1b86a40b4560" containerName="mariadb-account-delete" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.654935 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="43d6700e-9065-436f-aa67-68590f89f57d" containerName="probe" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.654943 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fb03211-0ce8-4dde-96ec-6692589cd895" containerName="watcher-decision-engine" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.654952 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c355ccd-843e-46c4-93d0-bd00e17867e8" containerName="probe" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.654960 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="93ee5017-b7fa-4c09-add3-76d393bb0e2b" containerName="cinder-api-log" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.654969 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="93ee5017-b7fa-4c09-add3-76d393bb0e2b" containerName="cinder-api" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.654978 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="43d6700e-9065-436f-aa67-68590f89f57d" containerName="cinder-scheduler" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.655538 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.658593 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.684706 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.697408 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8ccb0623-4014-43c8-afb2-76c128df28b6-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8ccb0623-4014-43c8-afb2-76c128df28b6\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.697493 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpml9\" (UniqueName: \"kubernetes.io/projected/8ccb0623-4014-43c8-afb2-76c128df28b6-kube-api-access-tpml9\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8ccb0623-4014-43c8-afb2-76c128df28b6\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.697512 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8ccb0623-4014-43c8-afb2-76c128df28b6-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8ccb0623-4014-43c8-afb2-76c128df28b6\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.697550 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ccb0623-4014-43c8-afb2-76c128df28b6-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8ccb0623-4014-43c8-afb2-76c128df28b6\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.697596 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ccb0623-4014-43c8-afb2-76c128df28b6-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8ccb0623-4014-43c8-afb2-76c128df28b6\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.697624 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ccb0623-4014-43c8-afb2-76c128df28b6-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8ccb0623-4014-43c8-afb2-76c128df28b6\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.744355 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.798949 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b630568c-0328-4374-9115-8ba4633b36d9-run-httpd\") pod \"b630568c-0328-4374-9115-8ba4633b36d9\" (UID: \"b630568c-0328-4374-9115-8ba4633b36d9\") " Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.799019 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b630568c-0328-4374-9115-8ba4633b36d9-scripts\") pod \"b630568c-0328-4374-9115-8ba4633b36d9\" (UID: \"b630568c-0328-4374-9115-8ba4633b36d9\") " Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.799067 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b630568c-0328-4374-9115-8ba4633b36d9-sg-core-conf-yaml\") pod \"b630568c-0328-4374-9115-8ba4633b36d9\" (UID: \"b630568c-0328-4374-9115-8ba4633b36d9\") " Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.799097 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b630568c-0328-4374-9115-8ba4633b36d9-config-data\") pod \"b630568c-0328-4374-9115-8ba4633b36d9\" (UID: \"b630568c-0328-4374-9115-8ba4633b36d9\") " Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.799152 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b630568c-0328-4374-9115-8ba4633b36d9-log-httpd\") pod \"b630568c-0328-4374-9115-8ba4633b36d9\" (UID: \"b630568c-0328-4374-9115-8ba4633b36d9\") " Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.799242 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9r86\" (UniqueName: \"kubernetes.io/projected/b630568c-0328-4374-9115-8ba4633b36d9-kube-api-access-h9r86\") pod \"b630568c-0328-4374-9115-8ba4633b36d9\" (UID: \"b630568c-0328-4374-9115-8ba4633b36d9\") " Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.799975 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b630568c-0328-4374-9115-8ba4633b36d9-ceilometer-tls-certs\") pod \"b630568c-0328-4374-9115-8ba4633b36d9\" (UID: \"b630568c-0328-4374-9115-8ba4633b36d9\") " Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.800006 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b630568c-0328-4374-9115-8ba4633b36d9-combined-ca-bundle\") pod \"b630568c-0328-4374-9115-8ba4633b36d9\" (UID: \"b630568c-0328-4374-9115-8ba4633b36d9\") " Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.800274 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8ccb0623-4014-43c8-afb2-76c128df28b6-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8ccb0623-4014-43c8-afb2-76c128df28b6\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.799677 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b630568c-0328-4374-9115-8ba4633b36d9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod 
"b630568c-0328-4374-9115-8ba4633b36d9" (UID: "b630568c-0328-4374-9115-8ba4633b36d9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.800321 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpml9\" (UniqueName: \"kubernetes.io/projected/8ccb0623-4014-43c8-afb2-76c128df28b6-kube-api-access-tpml9\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8ccb0623-4014-43c8-afb2-76c128df28b6\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.799925 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b630568c-0328-4374-9115-8ba4633b36d9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b630568c-0328-4374-9115-8ba4633b36d9" (UID: "b630568c-0328-4374-9115-8ba4633b36d9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.800343 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8ccb0623-4014-43c8-afb2-76c128df28b6-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8ccb0623-4014-43c8-afb2-76c128df28b6\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.800459 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ccb0623-4014-43c8-afb2-76c128df28b6-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8ccb0623-4014-43c8-afb2-76c128df28b6\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.800525 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ccb0623-4014-43c8-afb2-76c128df28b6-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8ccb0623-4014-43c8-afb2-76c128df28b6\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.800557 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ccb0623-4014-43c8-afb2-76c128df28b6-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8ccb0623-4014-43c8-afb2-76c128df28b6\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.800874 4796 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b630568c-0328-4374-9115-8ba4633b36d9-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.800886 4796 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b630568c-0328-4374-9115-8ba4633b36d9-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.801169 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ccb0623-4014-43c8-afb2-76c128df28b6-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8ccb0623-4014-43c8-afb2-76c128df28b6\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:21 crc 
kubenswrapper[4796]: I1202 20:40:21.808078 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8ccb0623-4014-43c8-afb2-76c128df28b6-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8ccb0623-4014-43c8-afb2-76c128df28b6\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.808179 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b630568c-0328-4374-9115-8ba4633b36d9-kube-api-access-h9r86" (OuterVolumeSpecName: "kube-api-access-h9r86") pod "b630568c-0328-4374-9115-8ba4633b36d9" (UID: "b630568c-0328-4374-9115-8ba4633b36d9"). InnerVolumeSpecName "kube-api-access-h9r86". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.808196 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ccb0623-4014-43c8-afb2-76c128df28b6-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8ccb0623-4014-43c8-afb2-76c128df28b6\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.808663 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ccb0623-4014-43c8-afb2-76c128df28b6-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8ccb0623-4014-43c8-afb2-76c128df28b6\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.808771 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b630568c-0328-4374-9115-8ba4633b36d9-scripts" (OuterVolumeSpecName: "scripts") pod "b630568c-0328-4374-9115-8ba4633b36d9" (UID: "b630568c-0328-4374-9115-8ba4633b36d9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.809814 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8ccb0623-4014-43c8-afb2-76c128df28b6-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8ccb0623-4014-43c8-afb2-76c128df28b6\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.841632 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b630568c-0328-4374-9115-8ba4633b36d9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b630568c-0328-4374-9115-8ba4633b36d9" (UID: "b630568c-0328-4374-9115-8ba4633b36d9"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.850998 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpml9\" (UniqueName: \"kubernetes.io/projected/8ccb0623-4014-43c8-afb2-76c128df28b6-kube-api-access-tpml9\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8ccb0623-4014-43c8-afb2-76c128df28b6\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.883815 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b630568c-0328-4374-9115-8ba4633b36d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b630568c-0328-4374-9115-8ba4633b36d9" (UID: "b630568c-0328-4374-9115-8ba4633b36d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.894829 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b630568c-0328-4374-9115-8ba4633b36d9-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "b630568c-0328-4374-9115-8ba4633b36d9" (UID: "b630568c-0328-4374-9115-8ba4633b36d9"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.904247 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b630568c-0328-4374-9115-8ba4633b36d9-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.904294 4796 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b630568c-0328-4374-9115-8ba4633b36d9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.904306 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9r86\" (UniqueName: \"kubernetes.io/projected/b630568c-0328-4374-9115-8ba4633b36d9-kube-api-access-h9r86\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.904315 4796 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b630568c-0328-4374-9115-8ba4633b36d9-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.904325 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b630568c-0328-4374-9115-8ba4633b36d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:21 crc kubenswrapper[4796]: I1202 20:40:21.910339 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b630568c-0328-4374-9115-8ba4633b36d9-config-data" (OuterVolumeSpecName: "config-data") pod "b630568c-0328-4374-9115-8ba4633b36d9" (UID: "b630568c-0328-4374-9115-8ba4633b36d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:22 crc kubenswrapper[4796]: I1202 20:40:22.006082 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b630568c-0328-4374-9115-8ba4633b36d9-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:22 crc kubenswrapper[4796]: I1202 20:40:22.057649 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:22 crc kubenswrapper[4796]: I1202 20:40:22.264885 4796 scope.go:117] "RemoveContainer" containerID="a3b1e4c1e71e5db50d52130283a4f488c4d2f8d9bfa463ec7357e0975e3daac1" Dec 02 20:40:22 crc kubenswrapper[4796]: E1202 20:40:22.265362 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wzhpq_openshift-machine-config-operator(5558dc7c-93f9-4212-bf22-fdec743e47ee)\"" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" Dec 02 20:40:22 crc kubenswrapper[4796]: I1202 20:40:22.601329 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:40:22 crc kubenswrapper[4796]: W1202 20:40:22.607077 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ccb0623_4014_43c8_afb2_76c128df28b6.slice/crio-cd40c8a5538880494714ce7bd70849ba1a54b71e5e498a2a10ad69f64f2de426 WatchSource:0}: Error finding container cd40c8a5538880494714ce7bd70849ba1a54b71e5e498a2a10ad69f64f2de426: Status 404 returned error can't find the container with id cd40c8a5538880494714ce7bd70849ba1a54b71e5e498a2a10ad69f64f2de426 Dec 02 20:40:22 crc kubenswrapper[4796]: I1202 20:40:22.608078 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b630568c-0328-4374-9115-8ba4633b36d9","Type":"ContainerDied","Data":"45484b03e28148c56c541bf820fa8da2e13499321317a9bf40b723b1e4008327"} Dec 02 20:40:22 crc kubenswrapper[4796]: I1202 20:40:22.608132 4796 scope.go:117] "RemoveContainer" containerID="507699ff02d6cab9df8266a4c9169940352a204f4bb96cad363a7633c96e6961" Dec 02 20:40:22 crc kubenswrapper[4796]: I1202 20:40:22.608330 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:22 crc kubenswrapper[4796]: I1202 20:40:22.835013 4796 scope.go:117] "RemoveContainer" containerID="ce5daf3697504ebdf29a52e70e51dd8ea5e18f0825c6700427dd2a3751e1d621" Dec 02 20:40:22 crc kubenswrapper[4796]: I1202 20:40:22.875994 4796 scope.go:117] "RemoveContainer" containerID="f885e657b96e1bb5e03368bbd881c61f8a55926395c495a4cecd813bcb41028a" Dec 02 20:40:22 crc kubenswrapper[4796]: I1202 20:40:22.887371 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:40:22 crc kubenswrapper[4796]: I1202 20:40:22.894382 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:40:22 crc kubenswrapper[4796]: I1202 20:40:22.912860 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:40:22 crc kubenswrapper[4796]: I1202 20:40:22.916562 4796 scope.go:117] "RemoveContainer" containerID="ce50f47769ee4617aeb5d5476925c4d69eccf9c166f910180d59ee35852f47bf" Dec 02 20:40:22 crc kubenswrapper[4796]: E1202 20:40:22.917832 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b630568c-0328-4374-9115-8ba4633b36d9" containerName="sg-core" Dec 02 20:40:22 crc kubenswrapper[4796]: I1202 20:40:22.917865 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b630568c-0328-4374-9115-8ba4633b36d9" containerName="sg-core" Dec 02 20:40:22 crc kubenswrapper[4796]: E1202 20:40:22.917879 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b630568c-0328-4374-9115-8ba4633b36d9" containerName="proxy-httpd" Dec 02 20:40:22 crc kubenswrapper[4796]: I1202 20:40:22.917886 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b630568c-0328-4374-9115-8ba4633b36d9" containerName="proxy-httpd" Dec 02 20:40:22 crc kubenswrapper[4796]: E1202 20:40:22.917905 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b630568c-0328-4374-9115-8ba4633b36d9" containerName="ceilometer-central-agent" Dec 02 20:40:22 crc kubenswrapper[4796]: I1202 20:40:22.917911 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b630568c-0328-4374-9115-8ba4633b36d9" containerName="ceilometer-central-agent" Dec 02 20:40:22 crc kubenswrapper[4796]: E1202 20:40:22.917928 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b630568c-0328-4374-9115-8ba4633b36d9" containerName="ceilometer-notification-agent" Dec 02 20:40:22 crc kubenswrapper[4796]: I1202 20:40:22.917934 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b630568c-0328-4374-9115-8ba4633b36d9" containerName="ceilometer-notification-agent" Dec 02 20:40:22 crc kubenswrapper[4796]: I1202 20:40:22.918097 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="b630568c-0328-4374-9115-8ba4633b36d9" containerName="ceilometer-central-agent" Dec 02 20:40:22 crc kubenswrapper[4796]: I1202 20:40:22.918117 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="b630568c-0328-4374-9115-8ba4633b36d9" containerName="ceilometer-notification-agent" Dec 02 20:40:22 crc kubenswrapper[4796]: I1202 20:40:22.918134 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="b630568c-0328-4374-9115-8ba4633b36d9" containerName="proxy-httpd" Dec 02 20:40:22 crc kubenswrapper[4796]: I1202 20:40:22.918148 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="b630568c-0328-4374-9115-8ba4633b36d9" containerName="sg-core" Dec 02 20:40:22 crc kubenswrapper[4796]: 
I1202 20:40:22.921415 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:22 crc kubenswrapper[4796]: I1202 20:40:22.923811 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Dec 02 20:40:22 crc kubenswrapper[4796]: I1202 20:40:22.924079 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Dec 02 20:40:22 crc kubenswrapper[4796]: I1202 20:40:22.924282 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Dec 02 20:40:22 crc kubenswrapper[4796]: I1202 20:40:22.935467 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:40:23 crc kubenswrapper[4796]: I1202 20:40:23.020405 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-run-httpd\") pod \"ceilometer-0\" (UID: \"dd43218a-5ff0-49f3-bfbf-d2943c1214c4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:23 crc kubenswrapper[4796]: I1202 20:40:23.020452 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-config-data\") pod \"ceilometer-0\" (UID: \"dd43218a-5ff0-49f3-bfbf-d2943c1214c4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:23 crc kubenswrapper[4796]: I1202 20:40:23.020588 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dd43218a-5ff0-49f3-bfbf-d2943c1214c4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:23 crc kubenswrapper[4796]: I1202 20:40:23.020611 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhbjw\" (UniqueName: \"kubernetes.io/projected/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-kube-api-access-qhbjw\") pod \"ceilometer-0\" (UID: \"dd43218a-5ff0-49f3-bfbf-d2943c1214c4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:23 crc kubenswrapper[4796]: I1202 20:40:23.020659 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-log-httpd\") pod \"ceilometer-0\" (UID: \"dd43218a-5ff0-49f3-bfbf-d2943c1214c4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:23 crc kubenswrapper[4796]: I1202 20:40:23.020687 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dd43218a-5ff0-49f3-bfbf-d2943c1214c4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:23 crc kubenswrapper[4796]: I1202 20:40:23.020733 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-scripts\") pod \"ceilometer-0\" (UID: \"dd43218a-5ff0-49f3-bfbf-d2943c1214c4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:23 crc 
kubenswrapper[4796]: I1202 20:40:23.020758 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dd43218a-5ff0-49f3-bfbf-d2943c1214c4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:23 crc kubenswrapper[4796]: I1202 20:40:23.122474 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dd43218a-5ff0-49f3-bfbf-d2943c1214c4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:23 crc kubenswrapper[4796]: I1202 20:40:23.122521 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhbjw\" (UniqueName: \"kubernetes.io/projected/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-kube-api-access-qhbjw\") pod \"ceilometer-0\" (UID: \"dd43218a-5ff0-49f3-bfbf-d2943c1214c4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:23 crc kubenswrapper[4796]: I1202 20:40:23.122561 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-log-httpd\") pod \"ceilometer-0\" (UID: \"dd43218a-5ff0-49f3-bfbf-d2943c1214c4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:23 crc kubenswrapper[4796]: I1202 20:40:23.122592 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dd43218a-5ff0-49f3-bfbf-d2943c1214c4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:23 crc kubenswrapper[4796]: I1202 20:40:23.122607 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-scripts\") pod \"ceilometer-0\" (UID: \"dd43218a-5ff0-49f3-bfbf-d2943c1214c4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:23 crc kubenswrapper[4796]: I1202 20:40:23.122629 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dd43218a-5ff0-49f3-bfbf-d2943c1214c4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:23 crc kubenswrapper[4796]: I1202 20:40:23.123590 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-run-httpd\") pod \"ceilometer-0\" (UID: \"dd43218a-5ff0-49f3-bfbf-d2943c1214c4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:23 crc kubenswrapper[4796]: I1202 20:40:23.123901 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-run-httpd\") pod \"ceilometer-0\" (UID: \"dd43218a-5ff0-49f3-bfbf-d2943c1214c4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:23 crc kubenswrapper[4796]: I1202 20:40:23.123947 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-config-data\") pod 
\"ceilometer-0\" (UID: \"dd43218a-5ff0-49f3-bfbf-d2943c1214c4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:23 crc kubenswrapper[4796]: I1202 20:40:23.125648 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-log-httpd\") pod \"ceilometer-0\" (UID: \"dd43218a-5ff0-49f3-bfbf-d2943c1214c4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:23 crc kubenswrapper[4796]: I1202 20:40:23.127451 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dd43218a-5ff0-49f3-bfbf-d2943c1214c4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:23 crc kubenswrapper[4796]: I1202 20:40:23.127558 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-scripts\") pod \"ceilometer-0\" (UID: \"dd43218a-5ff0-49f3-bfbf-d2943c1214c4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:23 crc kubenswrapper[4796]: I1202 20:40:23.127833 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dd43218a-5ff0-49f3-bfbf-d2943c1214c4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:23 crc kubenswrapper[4796]: I1202 20:40:23.128530 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-config-data\") pod \"ceilometer-0\" (UID: \"dd43218a-5ff0-49f3-bfbf-d2943c1214c4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:23 crc kubenswrapper[4796]: I1202 20:40:23.128653 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dd43218a-5ff0-49f3-bfbf-d2943c1214c4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:23 crc kubenswrapper[4796]: I1202 20:40:23.140939 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhbjw\" (UniqueName: \"kubernetes.io/projected/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-kube-api-access-qhbjw\") pod \"ceilometer-0\" (UID: \"dd43218a-5ff0-49f3-bfbf-d2943c1214c4\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:23 crc kubenswrapper[4796]: I1202 20:40:23.244567 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:23 crc kubenswrapper[4796]: I1202 20:40:23.275186 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fb03211-0ce8-4dde-96ec-6692589cd895" path="/var/lib/kubelet/pods/6fb03211-0ce8-4dde-96ec-6692589cd895/volumes" Dec 02 20:40:23 crc kubenswrapper[4796]: I1202 20:40:23.276072 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b630568c-0328-4374-9115-8ba4633b36d9" path="/var/lib/kubelet/pods/b630568c-0328-4374-9115-8ba4633b36d9/volumes" Dec 02 20:40:23 crc kubenswrapper[4796]: I1202 20:40:23.618695 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"8ccb0623-4014-43c8-afb2-76c128df28b6","Type":"ContainerStarted","Data":"08ee217ad9a69939bc9e504340da7683be605f83658ec4879b2460b90ce695fe"} Dec 02 20:40:23 crc kubenswrapper[4796]: I1202 20:40:23.619017 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"8ccb0623-4014-43c8-afb2-76c128df28b6","Type":"ContainerStarted","Data":"cd40c8a5538880494714ce7bd70849ba1a54b71e5e498a2a10ad69f64f2de426"} Dec 02 20:40:23 crc kubenswrapper[4796]: I1202 20:40:23.643900 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.643861684 podStartE2EDuration="2.643861684s" podCreationTimestamp="2025-12-02 20:40:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:40:23.638494985 +0000 UTC m=+1706.641870519" watchObservedRunningTime="2025-12-02 20:40:23.643861684 +0000 UTC m=+1706.647237228" Dec 02 20:40:23 crc kubenswrapper[4796]: I1202 20:40:23.737522 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:40:23 crc kubenswrapper[4796]: W1202 20:40:23.744882 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd43218a_5ff0_49f3_bfbf_d2943c1214c4.slice/crio-58d8d6261feb64b0130791bf7ee3d509a3bd30374e2ab0151e2f22d189610ffc WatchSource:0}: Error finding container 58d8d6261feb64b0130791bf7ee3d509a3bd30374e2ab0151e2f22d189610ffc: Status 404 returned error can't find the container with id 58d8d6261feb64b0130791bf7ee3d509a3bd30374e2ab0151e2f22d189610ffc Dec 02 20:40:24 crc kubenswrapper[4796]: I1202 20:40:24.429375 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_8ccb0623-4014-43c8-afb2-76c128df28b6/watcher-decision-engine/0.log" Dec 02 20:40:24 crc kubenswrapper[4796]: I1202 20:40:24.631520 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"dd43218a-5ff0-49f3-bfbf-d2943c1214c4","Type":"ContainerStarted","Data":"f32dda16a0730e17bc7d67913a6b34a56d3de669b61bc6bec06cc742c422e691"} Dec 02 20:40:24 crc kubenswrapper[4796]: I1202 20:40:24.632125 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"dd43218a-5ff0-49f3-bfbf-d2943c1214c4","Type":"ContainerStarted","Data":"58d8d6261feb64b0130791bf7ee3d509a3bd30374e2ab0151e2f22d189610ffc"} Dec 02 20:40:25 crc kubenswrapper[4796]: I1202 20:40:25.615561 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_8ccb0623-4014-43c8-afb2-76c128df28b6/watcher-decision-engine/0.log" Dec 02 20:40:25 crc kubenswrapper[4796]: I1202 20:40:25.645556 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"dd43218a-5ff0-49f3-bfbf-d2943c1214c4","Type":"ContainerStarted","Data":"2eabad99a8907c8c8212fb2a22761d0f8eebc5124af335ee99f053d2de7a078b"} Dec 02 20:40:26 crc kubenswrapper[4796]: I1202 20:40:26.657197 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"dd43218a-5ff0-49f3-bfbf-d2943c1214c4","Type":"ContainerStarted","Data":"7b18cecbff4dfef655dc49929b095f681f55c69855762699e1500ce1274d6f9d"} Dec 02 20:40:26 crc kubenswrapper[4796]: I1202 20:40:26.820758 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_8ccb0623-4014-43c8-afb2-76c128df28b6/watcher-decision-engine/0.log" Dec 02 20:40:27 crc kubenswrapper[4796]: I1202 20:40:27.039127 4796 scope.go:117] "RemoveContainer" containerID="29db3627665f49b74aee13a1cf921981aefd405278d01f25c70845cd5b246bce" Dec 02 20:40:27 crc kubenswrapper[4796]: I1202 20:40:27.065877 4796 scope.go:117] "RemoveContainer" containerID="2d07824fb9033fb5945a1d9f5a408ae5f38afa3c92f14ff7c6117b16db4f5a45" Dec 02 20:40:27 crc kubenswrapper[4796]: I1202 20:40:27.138633 4796 scope.go:117] "RemoveContainer" containerID="8488b8338e1ae62ca80549a2127cb67ac6dbf8112311d7051c78acf4f9a7dceb" Dec 02 20:40:27 crc kubenswrapper[4796]: I1202 20:40:27.186246 4796 scope.go:117] "RemoveContainer" containerID="f9545693076834672c6f55810f5186b24f6eebbe1beef49b672deb1d8bf2932f" Dec 02 20:40:27 crc kubenswrapper[4796]: I1202 20:40:27.213916 4796 scope.go:117] "RemoveContainer" containerID="81ce03f29d50509e8e84d17d871ccc0e267c33f9774abf2b183fac33dbb8a41b" Dec 02 20:40:28 crc kubenswrapper[4796]: I1202 20:40:28.008668 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_8ccb0623-4014-43c8-afb2-76c128df28b6/watcher-decision-engine/0.log" Dec 02 20:40:28 crc kubenswrapper[4796]: I1202 20:40:28.679039 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"dd43218a-5ff0-49f3-bfbf-d2943c1214c4","Type":"ContainerStarted","Data":"99a20057997fd4c8eef220c15bbde0f5c26852f45d69e54cb3a6de0ff623d4ca"} Dec 02 20:40:28 crc kubenswrapper[4796]: I1202 20:40:28.679354 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:28 crc kubenswrapper[4796]: I1202 20:40:28.715123 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.944590715 podStartE2EDuration="6.715097452s" podCreationTimestamp="2025-12-02 20:40:22 +0000 UTC" firstStartedPulling="2025-12-02 20:40:23.74892826 +0000 UTC m=+1706.752303814" lastFinishedPulling="2025-12-02 20:40:27.519435017 +0000 UTC m=+1710.522810551" observedRunningTime="2025-12-02 20:40:28.714394055 +0000 UTC m=+1711.717769629" watchObservedRunningTime="2025-12-02 20:40:28.715097452 +0000 UTC m=+1711.718472986" Dec 02 20:40:29 crc kubenswrapper[4796]: I1202 20:40:29.200603 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_8ccb0623-4014-43c8-afb2-76c128df28b6/watcher-decision-engine/0.log" Dec 02 20:40:30 crc kubenswrapper[4796]: 
I1202 20:40:30.415804 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_8ccb0623-4014-43c8-afb2-76c128df28b6/watcher-decision-engine/0.log" Dec 02 20:40:31 crc kubenswrapper[4796]: I1202 20:40:31.634877 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_8ccb0623-4014-43c8-afb2-76c128df28b6/watcher-decision-engine/0.log" Dec 02 20:40:32 crc kubenswrapper[4796]: I1202 20:40:32.058441 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:32 crc kubenswrapper[4796]: I1202 20:40:32.091113 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:32 crc kubenswrapper[4796]: I1202 20:40:32.720787 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:32 crc kubenswrapper[4796]: I1202 20:40:32.748904 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:32 crc kubenswrapper[4796]: I1202 20:40:32.814791 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_8ccb0623-4014-43c8-afb2-76c128df28b6/watcher-decision-engine/0.log" Dec 02 20:40:33 crc kubenswrapper[4796]: I1202 20:40:33.265350 4796 scope.go:117] "RemoveContainer" containerID="a3b1e4c1e71e5db50d52130283a4f488c4d2f8d9bfa463ec7357e0975e3daac1" Dec 02 20:40:33 crc kubenswrapper[4796]: E1202 20:40:33.265532 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wzhpq_openshift-machine-config-operator(5558dc7c-93f9-4212-bf22-fdec743e47ee)\"" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" Dec 02 20:40:33 crc kubenswrapper[4796]: I1202 20:40:33.996565 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_8ccb0623-4014-43c8-afb2-76c128df28b6/watcher-decision-engine/0.log" Dec 02 20:40:34 crc kubenswrapper[4796]: I1202 20:40:34.109470 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-mr4ks"] Dec 02 20:40:34 crc kubenswrapper[4796]: I1202 20:40:34.116058 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-mr4ks"] Dec 02 20:40:34 crc kubenswrapper[4796]: I1202 20:40:34.161478 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watchercb37-account-delete-vmxdg"] Dec 02 20:40:34 crc kubenswrapper[4796]: I1202 20:40:34.162718 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watchercb37-account-delete-vmxdg" Dec 02 20:40:34 crc kubenswrapper[4796]: I1202 20:40:34.181078 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watchercb37-account-delete-vmxdg"] Dec 02 20:40:34 crc kubenswrapper[4796]: I1202 20:40:34.201104 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:40:34 crc kubenswrapper[4796]: I1202 20:40:34.201320 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="f704b1c0-8f43-442d-85fd-1ed65c91ac3a" containerName="watcher-applier" containerID="cri-o://2fabd26e24c472ff082e3ef7812954c02df1fce24524e9890c39ae7f2ca85ecf" gracePeriod=30 Dec 02 20:40:34 crc kubenswrapper[4796]: I1202 20:40:34.281241 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15c46c46-556a-4881-b771-8e7e99f748b0-operator-scripts\") pod \"watchercb37-account-delete-vmxdg\" (UID: \"15c46c46-556a-4881-b771-8e7e99f748b0\") " pod="watcher-kuttl-default/watchercb37-account-delete-vmxdg" Dec 02 20:40:34 crc kubenswrapper[4796]: I1202 20:40:34.281484 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrjjc\" (UniqueName: \"kubernetes.io/projected/15c46c46-556a-4881-b771-8e7e99f748b0-kube-api-access-jrjjc\") pod \"watchercb37-account-delete-vmxdg\" (UID: \"15c46c46-556a-4881-b771-8e7e99f748b0\") " pod="watcher-kuttl-default/watchercb37-account-delete-vmxdg" Dec 02 20:40:34 crc kubenswrapper[4796]: I1202 20:40:34.383053 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15c46c46-556a-4881-b771-8e7e99f748b0-operator-scripts\") pod \"watchercb37-account-delete-vmxdg\" (UID: \"15c46c46-556a-4881-b771-8e7e99f748b0\") " pod="watcher-kuttl-default/watchercb37-account-delete-vmxdg" Dec 02 20:40:34 crc kubenswrapper[4796]: I1202 20:40:34.383141 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrjjc\" (UniqueName: \"kubernetes.io/projected/15c46c46-556a-4881-b771-8e7e99f748b0-kube-api-access-jrjjc\") pod \"watchercb37-account-delete-vmxdg\" (UID: \"15c46c46-556a-4881-b771-8e7e99f748b0\") " pod="watcher-kuttl-default/watchercb37-account-delete-vmxdg" Dec 02 20:40:34 crc kubenswrapper[4796]: I1202 20:40:34.383849 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15c46c46-556a-4881-b771-8e7e99f748b0-operator-scripts\") pod \"watchercb37-account-delete-vmxdg\" (UID: \"15c46c46-556a-4881-b771-8e7e99f748b0\") " pod="watcher-kuttl-default/watchercb37-account-delete-vmxdg" Dec 02 20:40:34 crc kubenswrapper[4796]: I1202 20:40:34.392304 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:40:34 crc kubenswrapper[4796]: I1202 20:40:34.412679 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:40:34 crc kubenswrapper[4796]: I1202 20:40:34.412887 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="f393f072-a05c-4fcb-a197-434c890e5dd6" containerName="watcher-kuttl-api-log" 
containerID="cri-o://e82a697072cd554df539c6e1c4ab57711af2c8c339863bb7980760c57eccec5e" gracePeriod=30 Dec 02 20:40:34 crc kubenswrapper[4796]: I1202 20:40:34.413003 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="f393f072-a05c-4fcb-a197-434c890e5dd6" containerName="watcher-api" containerID="cri-o://17f6092877c3089abd982c52e8ff604ec306860fa444c1b5984717e69b860aec" gracePeriod=30 Dec 02 20:40:34 crc kubenswrapper[4796]: I1202 20:40:34.429135 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrjjc\" (UniqueName: \"kubernetes.io/projected/15c46c46-556a-4881-b771-8e7e99f748b0-kube-api-access-jrjjc\") pod \"watchercb37-account-delete-vmxdg\" (UID: \"15c46c46-556a-4881-b771-8e7e99f748b0\") " pod="watcher-kuttl-default/watchercb37-account-delete-vmxdg" Dec 02 20:40:34 crc kubenswrapper[4796]: I1202 20:40:34.550101 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watchercb37-account-delete-vmxdg" Dec 02 20:40:34 crc kubenswrapper[4796]: I1202 20:40:34.742539 4796 generic.go:334] "Generic (PLEG): container finished" podID="f393f072-a05c-4fcb-a197-434c890e5dd6" containerID="e82a697072cd554df539c6e1c4ab57711af2c8c339863bb7980760c57eccec5e" exitCode=143 Dec 02 20:40:34 crc kubenswrapper[4796]: I1202 20:40:34.743005 4796 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" secret="" err="secret \"watcher-watcher-kuttl-dockercfg-b5th9\" not found" Dec 02 20:40:34 crc kubenswrapper[4796]: I1202 20:40:34.743566 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f393f072-a05c-4fcb-a197-434c890e5dd6","Type":"ContainerDied","Data":"e82a697072cd554df539c6e1c4ab57711af2c8c339863bb7980760c57eccec5e"} Dec 02 20:40:34 crc kubenswrapper[4796]: E1202 20:40:34.789929 4796 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found Dec 02 20:40:34 crc kubenswrapper[4796]: E1202 20:40:34.790001 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ccb0623-4014-43c8-afb2-76c128df28b6-config-data podName:8ccb0623-4014-43c8-afb2-76c128df28b6 nodeName:}" failed. No retries permitted until 2025-12-02 20:40:35.289981079 +0000 UTC m=+1718.293356613 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/8ccb0623-4014-43c8-afb2-76c128df28b6-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "8ccb0623-4014-43c8-afb2-76c128df28b6") : secret "watcher-kuttl-decision-engine-config-data" not found Dec 02 20:40:35 crc kubenswrapper[4796]: I1202 20:40:35.085099 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watchercb37-account-delete-vmxdg"] Dec 02 20:40:35 crc kubenswrapper[4796]: I1202 20:40:35.276926 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1e4eaa0-0211-4ce2-8b05-afb240242238" path="/var/lib/kubelet/pods/c1e4eaa0-0211-4ce2-8b05-afb240242238/volumes" Dec 02 20:40:35 crc kubenswrapper[4796]: E1202 20:40:35.302601 4796 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found Dec 02 20:40:35 crc kubenswrapper[4796]: E1202 20:40:35.302693 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ccb0623-4014-43c8-afb2-76c128df28b6-config-data podName:8ccb0623-4014-43c8-afb2-76c128df28b6 nodeName:}" failed. No retries permitted until 2025-12-02 20:40:36.302675365 +0000 UTC m=+1719.306050889 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/8ccb0623-4014-43c8-afb2-76c128df28b6-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "8ccb0623-4014-43c8-afb2-76c128df28b6") : secret "watcher-kuttl-decision-engine-config-data" not found Dec 02 20:40:35 crc kubenswrapper[4796]: I1202 20:40:35.752146 4796 generic.go:334] "Generic (PLEG): container finished" podID="f393f072-a05c-4fcb-a197-434c890e5dd6" containerID="17f6092877c3089abd982c52e8ff604ec306860fa444c1b5984717e69b860aec" exitCode=0 Dec 02 20:40:35 crc kubenswrapper[4796]: I1202 20:40:35.752225 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f393f072-a05c-4fcb-a197-434c890e5dd6","Type":"ContainerDied","Data":"17f6092877c3089abd982c52e8ff604ec306860fa444c1b5984717e69b860aec"} Dec 02 20:40:35 crc kubenswrapper[4796]: I1202 20:40:35.753576 4796 generic.go:334] "Generic (PLEG): container finished" podID="15c46c46-556a-4881-b771-8e7e99f748b0" containerID="37e29902cdc4dac45535ca62747b39587bfef777aab9d293c1381ad5c7d876c1" exitCode=0 Dec 02 20:40:35 crc kubenswrapper[4796]: I1202 20:40:35.753622 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchercb37-account-delete-vmxdg" event={"ID":"15c46c46-556a-4881-b771-8e7e99f748b0","Type":"ContainerDied","Data":"37e29902cdc4dac45535ca62747b39587bfef777aab9d293c1381ad5c7d876c1"} Dec 02 20:40:35 crc kubenswrapper[4796]: I1202 20:40:35.753671 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchercb37-account-delete-vmxdg" event={"ID":"15c46c46-556a-4881-b771-8e7e99f748b0","Type":"ContainerStarted","Data":"2cc1f9d3e14035229c8810c2d1241aff2b8346a850cf0032d67017e18c54545e"} Dec 02 20:40:35 crc kubenswrapper[4796]: I1202 20:40:35.753929 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="8ccb0623-4014-43c8-afb2-76c128df28b6" containerName="watcher-decision-engine" containerID="cri-o://08ee217ad9a69939bc9e504340da7683be605f83658ec4879b2460b90ce695fe" gracePeriod=30 Dec 02 20:40:35 crc kubenswrapper[4796]: E1202 20:40:35.997422 4796 
log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2fabd26e24c472ff082e3ef7812954c02df1fce24524e9890c39ae7f2ca85ecf" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 02 20:40:35 crc kubenswrapper[4796]: E1202 20:40:35.998838 4796 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2fabd26e24c472ff082e3ef7812954c02df1fce24524e9890c39ae7f2ca85ecf" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 02 20:40:36 crc kubenswrapper[4796]: E1202 20:40:36.010535 4796 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2fabd26e24c472ff082e3ef7812954c02df1fce24524e9890c39ae7f2ca85ecf" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 02 20:40:36 crc kubenswrapper[4796]: E1202 20:40:36.010658 4796 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="f704b1c0-8f43-442d-85fd-1ed65c91ac3a" containerName="watcher-applier" Dec 02 20:40:36 crc kubenswrapper[4796]: I1202 20:40:36.048718 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:40:36 crc kubenswrapper[4796]: I1202 20:40:36.114403 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f393f072-a05c-4fcb-a197-434c890e5dd6-logs\") pod \"f393f072-a05c-4fcb-a197-434c890e5dd6\" (UID: \"f393f072-a05c-4fcb-a197-434c890e5dd6\") " Dec 02 20:40:36 crc kubenswrapper[4796]: I1202 20:40:36.114460 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98s6w\" (UniqueName: \"kubernetes.io/projected/f393f072-a05c-4fcb-a197-434c890e5dd6-kube-api-access-98s6w\") pod \"f393f072-a05c-4fcb-a197-434c890e5dd6\" (UID: \"f393f072-a05c-4fcb-a197-434c890e5dd6\") " Dec 02 20:40:36 crc kubenswrapper[4796]: I1202 20:40:36.114524 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f393f072-a05c-4fcb-a197-434c890e5dd6-custom-prometheus-ca\") pod \"f393f072-a05c-4fcb-a197-434c890e5dd6\" (UID: \"f393f072-a05c-4fcb-a197-434c890e5dd6\") " Dec 02 20:40:36 crc kubenswrapper[4796]: I1202 20:40:36.114550 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f393f072-a05c-4fcb-a197-434c890e5dd6-cert-memcached-mtls\") pod \"f393f072-a05c-4fcb-a197-434c890e5dd6\" (UID: \"f393f072-a05c-4fcb-a197-434c890e5dd6\") " Dec 02 20:40:36 crc kubenswrapper[4796]: I1202 20:40:36.114622 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f393f072-a05c-4fcb-a197-434c890e5dd6-config-data\") pod \"f393f072-a05c-4fcb-a197-434c890e5dd6\" (UID: \"f393f072-a05c-4fcb-a197-434c890e5dd6\") " Dec 02 20:40:36 crc kubenswrapper[4796]: I1202 20:40:36.114689 4796 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f393f072-a05c-4fcb-a197-434c890e5dd6-combined-ca-bundle\") pod \"f393f072-a05c-4fcb-a197-434c890e5dd6\" (UID: \"f393f072-a05c-4fcb-a197-434c890e5dd6\") " Dec 02 20:40:36 crc kubenswrapper[4796]: I1202 20:40:36.115117 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f393f072-a05c-4fcb-a197-434c890e5dd6-logs" (OuterVolumeSpecName: "logs") pod "f393f072-a05c-4fcb-a197-434c890e5dd6" (UID: "f393f072-a05c-4fcb-a197-434c890e5dd6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:40:36 crc kubenswrapper[4796]: I1202 20:40:36.123934 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f393f072-a05c-4fcb-a197-434c890e5dd6-kube-api-access-98s6w" (OuterVolumeSpecName: "kube-api-access-98s6w") pod "f393f072-a05c-4fcb-a197-434c890e5dd6" (UID: "f393f072-a05c-4fcb-a197-434c890e5dd6"). InnerVolumeSpecName "kube-api-access-98s6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:40:36 crc kubenswrapper[4796]: I1202 20:40:36.144056 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f393f072-a05c-4fcb-a197-434c890e5dd6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f393f072-a05c-4fcb-a197-434c890e5dd6" (UID: "f393f072-a05c-4fcb-a197-434c890e5dd6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:36 crc kubenswrapper[4796]: I1202 20:40:36.146169 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f393f072-a05c-4fcb-a197-434c890e5dd6-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "f393f072-a05c-4fcb-a197-434c890e5dd6" (UID: "f393f072-a05c-4fcb-a197-434c890e5dd6"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:36 crc kubenswrapper[4796]: I1202 20:40:36.172304 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f393f072-a05c-4fcb-a197-434c890e5dd6-config-data" (OuterVolumeSpecName: "config-data") pod "f393f072-a05c-4fcb-a197-434c890e5dd6" (UID: "f393f072-a05c-4fcb-a197-434c890e5dd6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:36 crc kubenswrapper[4796]: I1202 20:40:36.181435 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f393f072-a05c-4fcb-a197-434c890e5dd6-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "f393f072-a05c-4fcb-a197-434c890e5dd6" (UID: "f393f072-a05c-4fcb-a197-434c890e5dd6"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:36 crc kubenswrapper[4796]: I1202 20:40:36.216623 4796 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f393f072-a05c-4fcb-a197-434c890e5dd6-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:36 crc kubenswrapper[4796]: I1202 20:40:36.216662 4796 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f393f072-a05c-4fcb-a197-434c890e5dd6-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:36 crc kubenswrapper[4796]: I1202 20:40:36.216672 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f393f072-a05c-4fcb-a197-434c890e5dd6-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:36 crc kubenswrapper[4796]: I1202 20:40:36.216681 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f393f072-a05c-4fcb-a197-434c890e5dd6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:36 crc kubenswrapper[4796]: I1202 20:40:36.216693 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f393f072-a05c-4fcb-a197-434c890e5dd6-logs\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:36 crc kubenswrapper[4796]: I1202 20:40:36.216702 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98s6w\" (UniqueName: \"kubernetes.io/projected/f393f072-a05c-4fcb-a197-434c890e5dd6-kube-api-access-98s6w\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:36 crc kubenswrapper[4796]: E1202 20:40:36.318586 4796 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found Dec 02 20:40:36 crc kubenswrapper[4796]: E1202 20:40:36.318654 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ccb0623-4014-43c8-afb2-76c128df28b6-config-data podName:8ccb0623-4014-43c8-afb2-76c128df28b6 nodeName:}" failed. No retries permitted until 2025-12-02 20:40:38.31863695 +0000 UTC m=+1721.322012484 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/8ccb0623-4014-43c8-afb2-76c128df28b6-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "8ccb0623-4014-43c8-afb2-76c128df28b6") : secret "watcher-kuttl-decision-engine-config-data" not found Dec 02 20:40:36 crc kubenswrapper[4796]: I1202 20:40:36.763598 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:40:36 crc kubenswrapper[4796]: I1202 20:40:36.763592 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f393f072-a05c-4fcb-a197-434c890e5dd6","Type":"ContainerDied","Data":"2fcb0265f65629d4a7b8833f596110ab2682030ecb74dab5d537628d60532e78"} Dec 02 20:40:36 crc kubenswrapper[4796]: I1202 20:40:36.763764 4796 scope.go:117] "RemoveContainer" containerID="17f6092877c3089abd982c52e8ff604ec306860fa444c1b5984717e69b860aec" Dec 02 20:40:36 crc kubenswrapper[4796]: I1202 20:40:36.788485 4796 scope.go:117] "RemoveContainer" containerID="e82a697072cd554df539c6e1c4ab57711af2c8c339863bb7980760c57eccec5e" Dec 02 20:40:36 crc kubenswrapper[4796]: I1202 20:40:36.804121 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:40:36 crc kubenswrapper[4796]: I1202 20:40:36.814018 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:40:36 crc kubenswrapper[4796]: I1202 20:40:36.961488 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:40:36 crc kubenswrapper[4796]: I1202 20:40:36.961758 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="dd43218a-5ff0-49f3-bfbf-d2943c1214c4" containerName="ceilometer-central-agent" containerID="cri-o://f32dda16a0730e17bc7d67913a6b34a56d3de669b61bc6bec06cc742c422e691" gracePeriod=30 Dec 02 20:40:36 crc kubenswrapper[4796]: I1202 20:40:36.962131 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="dd43218a-5ff0-49f3-bfbf-d2943c1214c4" containerName="proxy-httpd" containerID="cri-o://99a20057997fd4c8eef220c15bbde0f5c26852f45d69e54cb3a6de0ff623d4ca" gracePeriod=30 Dec 02 20:40:36 crc kubenswrapper[4796]: I1202 20:40:36.962182 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="dd43218a-5ff0-49f3-bfbf-d2943c1214c4" containerName="sg-core" containerID="cri-o://7b18cecbff4dfef655dc49929b095f681f55c69855762699e1500ce1274d6f9d" gracePeriod=30 Dec 02 20:40:36 crc kubenswrapper[4796]: I1202 20:40:36.962461 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="dd43218a-5ff0-49f3-bfbf-d2943c1214c4" containerName="ceilometer-notification-agent" containerID="cri-o://2eabad99a8907c8c8212fb2a22761d0f8eebc5124af335ee99f053d2de7a078b" gracePeriod=30 Dec 02 20:40:37 crc kubenswrapper[4796]: I1202 20:40:37.181372 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watchercb37-account-delete-vmxdg" Dec 02 20:40:37 crc kubenswrapper[4796]: I1202 20:40:37.230711 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrjjc\" (UniqueName: \"kubernetes.io/projected/15c46c46-556a-4881-b771-8e7e99f748b0-kube-api-access-jrjjc\") pod \"15c46c46-556a-4881-b771-8e7e99f748b0\" (UID: \"15c46c46-556a-4881-b771-8e7e99f748b0\") " Dec 02 20:40:37 crc kubenswrapper[4796]: I1202 20:40:37.230785 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15c46c46-556a-4881-b771-8e7e99f748b0-operator-scripts\") pod \"15c46c46-556a-4881-b771-8e7e99f748b0\" (UID: \"15c46c46-556a-4881-b771-8e7e99f748b0\") " Dec 02 20:40:37 crc kubenswrapper[4796]: I1202 20:40:37.232635 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15c46c46-556a-4881-b771-8e7e99f748b0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "15c46c46-556a-4881-b771-8e7e99f748b0" (UID: "15c46c46-556a-4881-b771-8e7e99f748b0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:40:37 crc kubenswrapper[4796]: I1202 20:40:37.249390 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15c46c46-556a-4881-b771-8e7e99f748b0-kube-api-access-jrjjc" (OuterVolumeSpecName: "kube-api-access-jrjjc") pod "15c46c46-556a-4881-b771-8e7e99f748b0" (UID: "15c46c46-556a-4881-b771-8e7e99f748b0"). InnerVolumeSpecName "kube-api-access-jrjjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:40:37 crc kubenswrapper[4796]: I1202 20:40:37.279596 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f393f072-a05c-4fcb-a197-434c890e5dd6" path="/var/lib/kubelet/pods/f393f072-a05c-4fcb-a197-434c890e5dd6/volumes" Dec 02 20:40:37 crc kubenswrapper[4796]: I1202 20:40:37.332582 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15c46c46-556a-4881-b771-8e7e99f748b0-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:37 crc kubenswrapper[4796]: I1202 20:40:37.332623 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrjjc\" (UniqueName: \"kubernetes.io/projected/15c46c46-556a-4881-b771-8e7e99f748b0-kube-api-access-jrjjc\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:37 crc kubenswrapper[4796]: I1202 20:40:37.773229 4796 generic.go:334] "Generic (PLEG): container finished" podID="dd43218a-5ff0-49f3-bfbf-d2943c1214c4" containerID="99a20057997fd4c8eef220c15bbde0f5c26852f45d69e54cb3a6de0ff623d4ca" exitCode=0 Dec 02 20:40:37 crc kubenswrapper[4796]: I1202 20:40:37.773565 4796 generic.go:334] "Generic (PLEG): container finished" podID="dd43218a-5ff0-49f3-bfbf-d2943c1214c4" containerID="7b18cecbff4dfef655dc49929b095f681f55c69855762699e1500ce1274d6f9d" exitCode=2 Dec 02 20:40:37 crc kubenswrapper[4796]: I1202 20:40:37.773578 4796 generic.go:334] "Generic (PLEG): container finished" podID="dd43218a-5ff0-49f3-bfbf-d2943c1214c4" containerID="f32dda16a0730e17bc7d67913a6b34a56d3de669b61bc6bec06cc742c422e691" exitCode=0 Dec 02 20:40:37 crc kubenswrapper[4796]: I1202 20:40:37.773285 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"dd43218a-5ff0-49f3-bfbf-d2943c1214c4","Type":"ContainerDied","Data":"99a20057997fd4c8eef220c15bbde0f5c26852f45d69e54cb3a6de0ff623d4ca"} Dec 02 20:40:37 crc kubenswrapper[4796]: I1202 20:40:37.773653 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"dd43218a-5ff0-49f3-bfbf-d2943c1214c4","Type":"ContainerDied","Data":"7b18cecbff4dfef655dc49929b095f681f55c69855762699e1500ce1274d6f9d"} Dec 02 20:40:37 crc kubenswrapper[4796]: I1202 20:40:37.773668 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"dd43218a-5ff0-49f3-bfbf-d2943c1214c4","Type":"ContainerDied","Data":"f32dda16a0730e17bc7d67913a6b34a56d3de669b61bc6bec06cc742c422e691"} Dec 02 20:40:37 crc kubenswrapper[4796]: I1202 20:40:37.775811 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchercb37-account-delete-vmxdg" event={"ID":"15c46c46-556a-4881-b771-8e7e99f748b0","Type":"ContainerDied","Data":"2cc1f9d3e14035229c8810c2d1241aff2b8346a850cf0032d67017e18c54545e"} Dec 02 20:40:37 crc kubenswrapper[4796]: I1202 20:40:37.775851 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cc1f9d3e14035229c8810c2d1241aff2b8346a850cf0032d67017e18c54545e" Dec 02 20:40:37 crc kubenswrapper[4796]: I1202 20:40:37.775868 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watchercb37-account-delete-vmxdg" Dec 02 20:40:38 crc kubenswrapper[4796]: E1202 20:40:38.347577 4796 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found Dec 02 20:40:38 crc kubenswrapper[4796]: E1202 20:40:38.347634 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ccb0623-4014-43c8-afb2-76c128df28b6-config-data podName:8ccb0623-4014-43c8-afb2-76c128df28b6 nodeName:}" failed. No retries permitted until 2025-12-02 20:40:42.347620289 +0000 UTC m=+1725.350995823 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/8ccb0623-4014-43c8-afb2-76c128df28b6-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "8ccb0623-4014-43c8-afb2-76c128df28b6") : secret "watcher-kuttl-decision-engine-config-data" not found Dec 02 20:40:39 crc kubenswrapper[4796]: I1202 20:40:39.198863 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-jgr9z"] Dec 02 20:40:39 crc kubenswrapper[4796]: I1202 20:40:39.211006 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-jgr9z"] Dec 02 20:40:39 crc kubenswrapper[4796]: I1202 20:40:39.225636 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watchercb37-account-delete-vmxdg"] Dec 02 20:40:39 crc kubenswrapper[4796]: I1202 20:40:39.235635 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watchercb37-account-delete-vmxdg"] Dec 02 20:40:39 crc kubenswrapper[4796]: I1202 20:40:39.244693 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-cb37-account-create-update-xf2w9"] Dec 02 20:40:39 crc kubenswrapper[4796]: I1202 20:40:39.251103 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-cb37-account-create-update-xf2w9"] Dec 02 20:40:39 crc kubenswrapper[4796]: I1202 20:40:39.277084 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15c46c46-556a-4881-b771-8e7e99f748b0" path="/var/lib/kubelet/pods/15c46c46-556a-4881-b771-8e7e99f748b0/volumes" Dec 02 20:40:39 crc kubenswrapper[4796]: I1202 20:40:39.277962 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94d81d37-8e9d-47e1-b1de-415f2c7a5024" path="/var/lib/kubelet/pods/94d81d37-8e9d-47e1-b1de-415f2c7a5024/volumes" Dec 02 20:40:39 crc kubenswrapper[4796]: I1202 20:40:39.278982 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c19eaf38-ecbd-469a-a0b3-78028623baac" path="/var/lib/kubelet/pods/c19eaf38-ecbd-469a-a0b3-78028623baac/volumes" Dec 02 20:40:39 crc kubenswrapper[4796]: E1202 20:40:39.391300 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf704b1c0_8f43_442d_85fd_1ed65c91ac3a.slice/crio-2fabd26e24c472ff082e3ef7812954c02df1fce24524e9890c39ae7f2ca85ecf.scope\": RecentStats: unable to find data in memory cache]" Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.368300 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.389898 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fj7b\" (UniqueName: \"kubernetes.io/projected/f704b1c0-8f43-442d-85fd-1ed65c91ac3a-kube-api-access-8fj7b\") pod \"f704b1c0-8f43-442d-85fd-1ed65c91ac3a\" (UID: \"f704b1c0-8f43-442d-85fd-1ed65c91ac3a\") " Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.390050 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f704b1c0-8f43-442d-85fd-1ed65c91ac3a-cert-memcached-mtls\") pod \"f704b1c0-8f43-442d-85fd-1ed65c91ac3a\" (UID: \"f704b1c0-8f43-442d-85fd-1ed65c91ac3a\") " Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.390080 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f704b1c0-8f43-442d-85fd-1ed65c91ac3a-combined-ca-bundle\") pod \"f704b1c0-8f43-442d-85fd-1ed65c91ac3a\" (UID: \"f704b1c0-8f43-442d-85fd-1ed65c91ac3a\") " Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.390122 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f704b1c0-8f43-442d-85fd-1ed65c91ac3a-logs\") pod \"f704b1c0-8f43-442d-85fd-1ed65c91ac3a\" (UID: \"f704b1c0-8f43-442d-85fd-1ed65c91ac3a\") " Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.390168 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f704b1c0-8f43-442d-85fd-1ed65c91ac3a-config-data\") pod \"f704b1c0-8f43-442d-85fd-1ed65c91ac3a\" (UID: \"f704b1c0-8f43-442d-85fd-1ed65c91ac3a\") " Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.390637 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f704b1c0-8f43-442d-85fd-1ed65c91ac3a-logs" (OuterVolumeSpecName: "logs") pod "f704b1c0-8f43-442d-85fd-1ed65c91ac3a" (UID: "f704b1c0-8f43-442d-85fd-1ed65c91ac3a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.390955 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f704b1c0-8f43-442d-85fd-1ed65c91ac3a-logs\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.420428 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f704b1c0-8f43-442d-85fd-1ed65c91ac3a-kube-api-access-8fj7b" (OuterVolumeSpecName: "kube-api-access-8fj7b") pod "f704b1c0-8f43-442d-85fd-1ed65c91ac3a" (UID: "f704b1c0-8f43-442d-85fd-1ed65c91ac3a"). InnerVolumeSpecName "kube-api-access-8fj7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.434518 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f704b1c0-8f43-442d-85fd-1ed65c91ac3a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f704b1c0-8f43-442d-85fd-1ed65c91ac3a" (UID: "f704b1c0-8f43-442d-85fd-1ed65c91ac3a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.473564 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f704b1c0-8f43-442d-85fd-1ed65c91ac3a-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "f704b1c0-8f43-442d-85fd-1ed65c91ac3a" (UID: "f704b1c0-8f43-442d-85fd-1ed65c91ac3a"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.492887 4796 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f704b1c0-8f43-442d-85fd-1ed65c91ac3a-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.492922 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f704b1c0-8f43-442d-85fd-1ed65c91ac3a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.492933 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fj7b\" (UniqueName: \"kubernetes.io/projected/f704b1c0-8f43-442d-85fd-1ed65c91ac3a-kube-api-access-8fj7b\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.496608 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f704b1c0-8f43-442d-85fd-1ed65c91ac3a-config-data" (OuterVolumeSpecName: "config-data") pod "f704b1c0-8f43-442d-85fd-1ed65c91ac3a" (UID: "f704b1c0-8f43-442d-85fd-1ed65c91ac3a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.553151 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.593949 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8ccb0623-4014-43c8-afb2-76c128df28b6-cert-memcached-mtls\") pod \"8ccb0623-4014-43c8-afb2-76c128df28b6\" (UID: \"8ccb0623-4014-43c8-afb2-76c128df28b6\") " Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.594532 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ccb0623-4014-43c8-afb2-76c128df28b6-combined-ca-bundle\") pod \"8ccb0623-4014-43c8-afb2-76c128df28b6\" (UID: \"8ccb0623-4014-43c8-afb2-76c128df28b6\") " Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.594717 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ccb0623-4014-43c8-afb2-76c128df28b6-config-data\") pod \"8ccb0623-4014-43c8-afb2-76c128df28b6\" (UID: \"8ccb0623-4014-43c8-afb2-76c128df28b6\") " Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.594817 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpml9\" (UniqueName: \"kubernetes.io/projected/8ccb0623-4014-43c8-afb2-76c128df28b6-kube-api-access-tpml9\") pod \"8ccb0623-4014-43c8-afb2-76c128df28b6\" (UID: \"8ccb0623-4014-43c8-afb2-76c128df28b6\") " Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.594900 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ccb0623-4014-43c8-afb2-76c128df28b6-logs\") pod \"8ccb0623-4014-43c8-afb2-76c128df28b6\" (UID: \"8ccb0623-4014-43c8-afb2-76c128df28b6\") " Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.594997 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8ccb0623-4014-43c8-afb2-76c128df28b6-custom-prometheus-ca\") pod \"8ccb0623-4014-43c8-afb2-76c128df28b6\" (UID: \"8ccb0623-4014-43c8-afb2-76c128df28b6\") " Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.595376 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f704b1c0-8f43-442d-85fd-1ed65c91ac3a-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.598770 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ccb0623-4014-43c8-afb2-76c128df28b6-logs" (OuterVolumeSpecName: "logs") pod "8ccb0623-4014-43c8-afb2-76c128df28b6" (UID: "8ccb0623-4014-43c8-afb2-76c128df28b6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.601274 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ccb0623-4014-43c8-afb2-76c128df28b6-kube-api-access-tpml9" (OuterVolumeSpecName: "kube-api-access-tpml9") pod "8ccb0623-4014-43c8-afb2-76c128df28b6" (UID: "8ccb0623-4014-43c8-afb2-76c128df28b6"). InnerVolumeSpecName "kube-api-access-tpml9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.628675 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ccb0623-4014-43c8-afb2-76c128df28b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ccb0623-4014-43c8-afb2-76c128df28b6" (UID: "8ccb0623-4014-43c8-afb2-76c128df28b6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.630790 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ccb0623-4014-43c8-afb2-76c128df28b6-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "8ccb0623-4014-43c8-afb2-76c128df28b6" (UID: "8ccb0623-4014-43c8-afb2-76c128df28b6"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.662120 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ccb0623-4014-43c8-afb2-76c128df28b6-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "8ccb0623-4014-43c8-afb2-76c128df28b6" (UID: "8ccb0623-4014-43c8-afb2-76c128df28b6"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.664430 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ccb0623-4014-43c8-afb2-76c128df28b6-config-data" (OuterVolumeSpecName: "config-data") pod "8ccb0623-4014-43c8-afb2-76c128df28b6" (UID: "8ccb0623-4014-43c8-afb2-76c128df28b6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.696701 4796 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8ccb0623-4014-43c8-afb2-76c128df28b6-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.696734 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ccb0623-4014-43c8-afb2-76c128df28b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.696743 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ccb0623-4014-43c8-afb2-76c128df28b6-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.696754 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpml9\" (UniqueName: \"kubernetes.io/projected/8ccb0623-4014-43c8-afb2-76c128df28b6-kube-api-access-tpml9\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.696767 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ccb0623-4014-43c8-afb2-76c128df28b6-logs\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.696776 4796 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8ccb0623-4014-43c8-afb2-76c128df28b6-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.808781 4796 generic.go:334] "Generic (PLEG): container finished" podID="8ccb0623-4014-43c8-afb2-76c128df28b6" containerID="08ee217ad9a69939bc9e504340da7683be605f83658ec4879b2460b90ce695fe" exitCode=0 Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.808820 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.808868 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"8ccb0623-4014-43c8-afb2-76c128df28b6","Type":"ContainerDied","Data":"08ee217ad9a69939bc9e504340da7683be605f83658ec4879b2460b90ce695fe"} Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.808897 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"8ccb0623-4014-43c8-afb2-76c128df28b6","Type":"ContainerDied","Data":"cd40c8a5538880494714ce7bd70849ba1a54b71e5e498a2a10ad69f64f2de426"} Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.808916 4796 scope.go:117] "RemoveContainer" containerID="08ee217ad9a69939bc9e504340da7683be605f83658ec4879b2460b90ce695fe" Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.810418 4796 generic.go:334] "Generic (PLEG): container finished" podID="f704b1c0-8f43-442d-85fd-1ed65c91ac3a" containerID="2fabd26e24c472ff082e3ef7812954c02df1fce24524e9890c39ae7f2ca85ecf" exitCode=0 Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.810468 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.810552 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"f704b1c0-8f43-442d-85fd-1ed65c91ac3a","Type":"ContainerDied","Data":"2fabd26e24c472ff082e3ef7812954c02df1fce24524e9890c39ae7f2ca85ecf"} Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.810650 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"f704b1c0-8f43-442d-85fd-1ed65c91ac3a","Type":"ContainerDied","Data":"40a10f1cf7069a86b7a3c6319988bf928a10c3cd0913a322b6bfa04edcf0fcdd"} Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.851610 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.855085 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.856447 4796 scope.go:117] "RemoveContainer" containerID="08ee217ad9a69939bc9e504340da7683be605f83658ec4879b2460b90ce695fe" Dec 02 20:40:40 crc kubenswrapper[4796]: E1202 20:40:40.856902 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08ee217ad9a69939bc9e504340da7683be605f83658ec4879b2460b90ce695fe\": container with ID starting with 08ee217ad9a69939bc9e504340da7683be605f83658ec4879b2460b90ce695fe not found: ID does not exist" containerID="08ee217ad9a69939bc9e504340da7683be605f83658ec4879b2460b90ce695fe" Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.856932 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08ee217ad9a69939bc9e504340da7683be605f83658ec4879b2460b90ce695fe"} err="failed to get container status \"08ee217ad9a69939bc9e504340da7683be605f83658ec4879b2460b90ce695fe\": rpc error: code = NotFound desc = could not find container \"08ee217ad9a69939bc9e504340da7683be605f83658ec4879b2460b90ce695fe\": container with ID starting with 08ee217ad9a69939bc9e504340da7683be605f83658ec4879b2460b90ce695fe not found: ID does not exist" Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.856951 4796 scope.go:117] "RemoveContainer" containerID="2fabd26e24c472ff082e3ef7812954c02df1fce24524e9890c39ae7f2ca85ecf" Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.874894 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.881467 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.883102 4796 scope.go:117] "RemoveContainer" containerID="2fabd26e24c472ff082e3ef7812954c02df1fce24524e9890c39ae7f2ca85ecf" Dec 02 20:40:40 crc kubenswrapper[4796]: E1202 20:40:40.883452 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fabd26e24c472ff082e3ef7812954c02df1fce24524e9890c39ae7f2ca85ecf\": container with ID starting with 2fabd26e24c472ff082e3ef7812954c02df1fce24524e9890c39ae7f2ca85ecf not found: ID does not exist" containerID="2fabd26e24c472ff082e3ef7812954c02df1fce24524e9890c39ae7f2ca85ecf" Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.883540 4796 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fabd26e24c472ff082e3ef7812954c02df1fce24524e9890c39ae7f2ca85ecf"} err="failed to get container status \"2fabd26e24c472ff082e3ef7812954c02df1fce24524e9890c39ae7f2ca85ecf\": rpc error: code = NotFound desc = could not find container \"2fabd26e24c472ff082e3ef7812954c02df1fce24524e9890c39ae7f2ca85ecf\": container with ID starting with 2fabd26e24c472ff082e3ef7812954c02df1fce24524e9890c39ae7f2ca85ecf not found: ID does not exist" Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.899084 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="f393f072-a05c-4fcb-a197-434c890e5dd6" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.189:9322/\": dial tcp 10.217.0.189:9322: i/o timeout (Client.Timeout exceeded while awaiting headers)" Dec 02 20:40:40 crc kubenswrapper[4796]: I1202 20:40:40.899128 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="f393f072-a05c-4fcb-a197-434c890e5dd6" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.189:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.274695 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ccb0623-4014-43c8-afb2-76c128df28b6" path="/var/lib/kubelet/pods/8ccb0623-4014-43c8-afb2-76c128df28b6/volumes" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.275219 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f704b1c0-8f43-442d-85fd-1ed65c91ac3a" path="/var/lib/kubelet/pods/f704b1c0-8f43-442d-85fd-1ed65c91ac3a/volumes" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.278225 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.306387 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-combined-ca-bundle\") pod \"dd43218a-5ff0-49f3-bfbf-d2943c1214c4\" (UID: \"dd43218a-5ff0-49f3-bfbf-d2943c1214c4\") " Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.306557 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhbjw\" (UniqueName: \"kubernetes.io/projected/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-kube-api-access-qhbjw\") pod \"dd43218a-5ff0-49f3-bfbf-d2943c1214c4\" (UID: \"dd43218a-5ff0-49f3-bfbf-d2943c1214c4\") " Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.306640 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-log-httpd\") pod \"dd43218a-5ff0-49f3-bfbf-d2943c1214c4\" (UID: \"dd43218a-5ff0-49f3-bfbf-d2943c1214c4\") " Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.306699 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-config-data\") pod \"dd43218a-5ff0-49f3-bfbf-d2943c1214c4\" (UID: \"dd43218a-5ff0-49f3-bfbf-d2943c1214c4\") " Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.306724 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-run-httpd\") pod \"dd43218a-5ff0-49f3-bfbf-d2943c1214c4\" (UID: \"dd43218a-5ff0-49f3-bfbf-d2943c1214c4\") " Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.306748 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-scripts\") pod \"dd43218a-5ff0-49f3-bfbf-d2943c1214c4\" (UID: \"dd43218a-5ff0-49f3-bfbf-d2943c1214c4\") " Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.306769 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-sg-core-conf-yaml\") pod \"dd43218a-5ff0-49f3-bfbf-d2943c1214c4\" (UID: \"dd43218a-5ff0-49f3-bfbf-d2943c1214c4\") " Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.306844 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-ceilometer-tls-certs\") pod \"dd43218a-5ff0-49f3-bfbf-d2943c1214c4\" (UID: \"dd43218a-5ff0-49f3-bfbf-d2943c1214c4\") " Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.307151 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "dd43218a-5ff0-49f3-bfbf-d2943c1214c4" (UID: "dd43218a-5ff0-49f3-bfbf-d2943c1214c4"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.307485 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "dd43218a-5ff0-49f3-bfbf-d2943c1214c4" (UID: "dd43218a-5ff0-49f3-bfbf-d2943c1214c4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.321418 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-scripts" (OuterVolumeSpecName: "scripts") pod "dd43218a-5ff0-49f3-bfbf-d2943c1214c4" (UID: "dd43218a-5ff0-49f3-bfbf-d2943c1214c4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.321669 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-kube-api-access-qhbjw" (OuterVolumeSpecName: "kube-api-access-qhbjw") pod "dd43218a-5ff0-49f3-bfbf-d2943c1214c4" (UID: "dd43218a-5ff0-49f3-bfbf-d2943c1214c4"). InnerVolumeSpecName "kube-api-access-qhbjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.329582 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "dd43218a-5ff0-49f3-bfbf-d2943c1214c4" (UID: "dd43218a-5ff0-49f3-bfbf-d2943c1214c4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.369649 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "dd43218a-5ff0-49f3-bfbf-d2943c1214c4" (UID: "dd43218a-5ff0-49f3-bfbf-d2943c1214c4"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.374670 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dd43218a-5ff0-49f3-bfbf-d2943c1214c4" (UID: "dd43218a-5ff0-49f3-bfbf-d2943c1214c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.396400 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-config-data" (OuterVolumeSpecName: "config-data") pod "dd43218a-5ff0-49f3-bfbf-d2943c1214c4" (UID: "dd43218a-5ff0-49f3-bfbf-d2943c1214c4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.408596 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.408627 4796 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.408637 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.408645 4796 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.408658 4796 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.408669 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.408678 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhbjw\" (UniqueName: \"kubernetes.io/projected/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-kube-api-access-qhbjw\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.408686 4796 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd43218a-5ff0-49f3-bfbf-d2943c1214c4-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.824424 4796 generic.go:334] "Generic (PLEG): container finished" podID="dd43218a-5ff0-49f3-bfbf-d2943c1214c4" containerID="2eabad99a8907c8c8212fb2a22761d0f8eebc5124af335ee99f053d2de7a078b" exitCode=0 Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.824887 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"dd43218a-5ff0-49f3-bfbf-d2943c1214c4","Type":"ContainerDied","Data":"2eabad99a8907c8c8212fb2a22761d0f8eebc5124af335ee99f053d2de7a078b"} Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.825396 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"dd43218a-5ff0-49f3-bfbf-d2943c1214c4","Type":"ContainerDied","Data":"58d8d6261feb64b0130791bf7ee3d509a3bd30374e2ab0151e2f22d189610ffc"} Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.824977 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.825439 4796 scope.go:117] "RemoveContainer" containerID="99a20057997fd4c8eef220c15bbde0f5c26852f45d69e54cb3a6de0ff623d4ca" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.845971 4796 scope.go:117] "RemoveContainer" containerID="7b18cecbff4dfef655dc49929b095f681f55c69855762699e1500ce1274d6f9d" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.859845 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.876274 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.890362 4796 scope.go:117] "RemoveContainer" containerID="2eabad99a8907c8c8212fb2a22761d0f8eebc5124af335ee99f053d2de7a078b" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.895267 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:40:41 crc kubenswrapper[4796]: E1202 20:40:41.895696 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ccb0623-4014-43c8-afb2-76c128df28b6" containerName="watcher-decision-engine" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.895716 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ccb0623-4014-43c8-afb2-76c128df28b6" containerName="watcher-decision-engine" Dec 02 20:40:41 crc kubenswrapper[4796]: E1202 20:40:41.895728 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f393f072-a05c-4fcb-a197-434c890e5dd6" containerName="watcher-api" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.895736 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f393f072-a05c-4fcb-a197-434c890e5dd6" containerName="watcher-api" Dec 02 20:40:41 crc kubenswrapper[4796]: E1202 20:40:41.895743 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd43218a-5ff0-49f3-bfbf-d2943c1214c4" containerName="ceilometer-central-agent" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.895750 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd43218a-5ff0-49f3-bfbf-d2943c1214c4" containerName="ceilometer-central-agent" Dec 02 20:40:41 crc kubenswrapper[4796]: E1202 20:40:41.895760 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd43218a-5ff0-49f3-bfbf-d2943c1214c4" containerName="sg-core" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.895766 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd43218a-5ff0-49f3-bfbf-d2943c1214c4" containerName="sg-core" Dec 02 20:40:41 crc kubenswrapper[4796]: E1202 20:40:41.895780 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd43218a-5ff0-49f3-bfbf-d2943c1214c4" containerName="ceilometer-notification-agent" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.895786 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd43218a-5ff0-49f3-bfbf-d2943c1214c4" containerName="ceilometer-notification-agent" Dec 02 20:40:41 crc kubenswrapper[4796]: E1202 20:40:41.895792 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f393f072-a05c-4fcb-a197-434c890e5dd6" containerName="watcher-kuttl-api-log" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.895798 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f393f072-a05c-4fcb-a197-434c890e5dd6" containerName="watcher-kuttl-api-log" Dec 02 20:40:41 crc kubenswrapper[4796]: 
E1202 20:40:41.895807 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15c46c46-556a-4881-b771-8e7e99f748b0" containerName="mariadb-account-delete" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.895813 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="15c46c46-556a-4881-b771-8e7e99f748b0" containerName="mariadb-account-delete" Dec 02 20:40:41 crc kubenswrapper[4796]: E1202 20:40:41.895833 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f704b1c0-8f43-442d-85fd-1ed65c91ac3a" containerName="watcher-applier" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.895840 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f704b1c0-8f43-442d-85fd-1ed65c91ac3a" containerName="watcher-applier" Dec 02 20:40:41 crc kubenswrapper[4796]: E1202 20:40:41.895850 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd43218a-5ff0-49f3-bfbf-d2943c1214c4" containerName="proxy-httpd" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.895857 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd43218a-5ff0-49f3-bfbf-d2943c1214c4" containerName="proxy-httpd" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.896010 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="15c46c46-556a-4881-b771-8e7e99f748b0" containerName="mariadb-account-delete" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.896023 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd43218a-5ff0-49f3-bfbf-d2943c1214c4" containerName="sg-core" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.896032 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd43218a-5ff0-49f3-bfbf-d2943c1214c4" containerName="proxy-httpd" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.896044 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f393f072-a05c-4fcb-a197-434c890e5dd6" containerName="watcher-api" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.896053 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f393f072-a05c-4fcb-a197-434c890e5dd6" containerName="watcher-kuttl-api-log" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.896062 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd43218a-5ff0-49f3-bfbf-d2943c1214c4" containerName="ceilometer-notification-agent" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.896073 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f704b1c0-8f43-442d-85fd-1ed65c91ac3a" containerName="watcher-applier" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.896088 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ccb0623-4014-43c8-afb2-76c128df28b6" containerName="watcher-decision-engine" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.896095 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd43218a-5ff0-49f3-bfbf-d2943c1214c4" containerName="ceilometer-central-agent" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.897593 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.900199 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.900773 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.901168 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.903648 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.916444 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67b69481-e440-4d81-a985-460e6f585375-log-httpd\") pod \"ceilometer-0\" (UID: \"67b69481-e440-4d81-a985-460e6f585375\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.916491 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67b69481-e440-4d81-a985-460e6f585375-run-httpd\") pod \"ceilometer-0\" (UID: \"67b69481-e440-4d81-a985-460e6f585375\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.916517 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5hf6\" (UniqueName: \"kubernetes.io/projected/67b69481-e440-4d81-a985-460e6f585375-kube-api-access-h5hf6\") pod \"ceilometer-0\" (UID: \"67b69481-e440-4d81-a985-460e6f585375\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.916618 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67b69481-e440-4d81-a985-460e6f585375-scripts\") pod \"ceilometer-0\" (UID: \"67b69481-e440-4d81-a985-460e6f585375\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.916704 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67b69481-e440-4d81-a985-460e6f585375-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"67b69481-e440-4d81-a985-460e6f585375\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.916843 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/67b69481-e440-4d81-a985-460e6f585375-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"67b69481-e440-4d81-a985-460e6f585375\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.916884 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/67b69481-e440-4d81-a985-460e6f585375-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"67b69481-e440-4d81-a985-460e6f585375\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.917079 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67b69481-e440-4d81-a985-460e6f585375-config-data\") pod \"ceilometer-0\" (UID: \"67b69481-e440-4d81-a985-460e6f585375\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.938783 4796 scope.go:117] "RemoveContainer" containerID="f32dda16a0730e17bc7d67913a6b34a56d3de669b61bc6bec06cc742c422e691" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.960969 4796 scope.go:117] "RemoveContainer" containerID="99a20057997fd4c8eef220c15bbde0f5c26852f45d69e54cb3a6de0ff623d4ca" Dec 02 20:40:41 crc kubenswrapper[4796]: E1202 20:40:41.961578 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99a20057997fd4c8eef220c15bbde0f5c26852f45d69e54cb3a6de0ff623d4ca\": container with ID starting with 99a20057997fd4c8eef220c15bbde0f5c26852f45d69e54cb3a6de0ff623d4ca not found: ID does not exist" containerID="99a20057997fd4c8eef220c15bbde0f5c26852f45d69e54cb3a6de0ff623d4ca" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.961620 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99a20057997fd4c8eef220c15bbde0f5c26852f45d69e54cb3a6de0ff623d4ca"} err="failed to get container status \"99a20057997fd4c8eef220c15bbde0f5c26852f45d69e54cb3a6de0ff623d4ca\": rpc error: code = NotFound desc = could not find container \"99a20057997fd4c8eef220c15bbde0f5c26852f45d69e54cb3a6de0ff623d4ca\": container with ID starting with 99a20057997fd4c8eef220c15bbde0f5c26852f45d69e54cb3a6de0ff623d4ca not found: ID does not exist" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.961714 4796 scope.go:117] "RemoveContainer" containerID="7b18cecbff4dfef655dc49929b095f681f55c69855762699e1500ce1274d6f9d" Dec 02 20:40:41 crc kubenswrapper[4796]: E1202 20:40:41.963166 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b18cecbff4dfef655dc49929b095f681f55c69855762699e1500ce1274d6f9d\": container with ID starting with 7b18cecbff4dfef655dc49929b095f681f55c69855762699e1500ce1274d6f9d not found: ID does not exist" containerID="7b18cecbff4dfef655dc49929b095f681f55c69855762699e1500ce1274d6f9d" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.963206 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b18cecbff4dfef655dc49929b095f681f55c69855762699e1500ce1274d6f9d"} err="failed to get container status \"7b18cecbff4dfef655dc49929b095f681f55c69855762699e1500ce1274d6f9d\": rpc error: code = NotFound desc = could not find container \"7b18cecbff4dfef655dc49929b095f681f55c69855762699e1500ce1274d6f9d\": container with ID starting with 7b18cecbff4dfef655dc49929b095f681f55c69855762699e1500ce1274d6f9d not found: ID does not exist" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.963225 4796 scope.go:117] "RemoveContainer" containerID="2eabad99a8907c8c8212fb2a22761d0f8eebc5124af335ee99f053d2de7a078b" Dec 02 20:40:41 crc kubenswrapper[4796]: E1202 20:40:41.963637 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2eabad99a8907c8c8212fb2a22761d0f8eebc5124af335ee99f053d2de7a078b\": container with ID starting with 2eabad99a8907c8c8212fb2a22761d0f8eebc5124af335ee99f053d2de7a078b not found: ID does not exist" 
containerID="2eabad99a8907c8c8212fb2a22761d0f8eebc5124af335ee99f053d2de7a078b" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.963675 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2eabad99a8907c8c8212fb2a22761d0f8eebc5124af335ee99f053d2de7a078b"} err="failed to get container status \"2eabad99a8907c8c8212fb2a22761d0f8eebc5124af335ee99f053d2de7a078b\": rpc error: code = NotFound desc = could not find container \"2eabad99a8907c8c8212fb2a22761d0f8eebc5124af335ee99f053d2de7a078b\": container with ID starting with 2eabad99a8907c8c8212fb2a22761d0f8eebc5124af335ee99f053d2de7a078b not found: ID does not exist" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.963733 4796 scope.go:117] "RemoveContainer" containerID="f32dda16a0730e17bc7d67913a6b34a56d3de669b61bc6bec06cc742c422e691" Dec 02 20:40:41 crc kubenswrapper[4796]: E1202 20:40:41.964049 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f32dda16a0730e17bc7d67913a6b34a56d3de669b61bc6bec06cc742c422e691\": container with ID starting with f32dda16a0730e17bc7d67913a6b34a56d3de669b61bc6bec06cc742c422e691 not found: ID does not exist" containerID="f32dda16a0730e17bc7d67913a6b34a56d3de669b61bc6bec06cc742c422e691" Dec 02 20:40:41 crc kubenswrapper[4796]: I1202 20:40:41.964078 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f32dda16a0730e17bc7d67913a6b34a56d3de669b61bc6bec06cc742c422e691"} err="failed to get container status \"f32dda16a0730e17bc7d67913a6b34a56d3de669b61bc6bec06cc742c422e691\": rpc error: code = NotFound desc = could not find container \"f32dda16a0730e17bc7d67913a6b34a56d3de669b61bc6bec06cc742c422e691\": container with ID starting with f32dda16a0730e17bc7d67913a6b34a56d3de669b61bc6bec06cc742c422e691 not found: ID does not exist" Dec 02 20:40:42 crc kubenswrapper[4796]: I1202 20:40:42.018076 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/67b69481-e440-4d81-a985-460e6f585375-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"67b69481-e440-4d81-a985-460e6f585375\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:42 crc kubenswrapper[4796]: I1202 20:40:42.018126 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/67b69481-e440-4d81-a985-460e6f585375-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"67b69481-e440-4d81-a985-460e6f585375\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:42 crc kubenswrapper[4796]: I1202 20:40:42.018245 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67b69481-e440-4d81-a985-460e6f585375-config-data\") pod \"ceilometer-0\" (UID: \"67b69481-e440-4d81-a985-460e6f585375\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:42 crc kubenswrapper[4796]: I1202 20:40:42.018324 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67b69481-e440-4d81-a985-460e6f585375-log-httpd\") pod \"ceilometer-0\" (UID: \"67b69481-e440-4d81-a985-460e6f585375\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:42 crc kubenswrapper[4796]: I1202 20:40:42.018732 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/67b69481-e440-4d81-a985-460e6f585375-run-httpd\") pod \"ceilometer-0\" (UID: \"67b69481-e440-4d81-a985-460e6f585375\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:42 crc kubenswrapper[4796]: I1202 20:40:42.018824 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5hf6\" (UniqueName: \"kubernetes.io/projected/67b69481-e440-4d81-a985-460e6f585375-kube-api-access-h5hf6\") pod \"ceilometer-0\" (UID: \"67b69481-e440-4d81-a985-460e6f585375\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:42 crc kubenswrapper[4796]: I1202 20:40:42.018866 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67b69481-e440-4d81-a985-460e6f585375-scripts\") pod \"ceilometer-0\" (UID: \"67b69481-e440-4d81-a985-460e6f585375\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:42 crc kubenswrapper[4796]: I1202 20:40:42.018923 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67b69481-e440-4d81-a985-460e6f585375-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"67b69481-e440-4d81-a985-460e6f585375\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:42 crc kubenswrapper[4796]: I1202 20:40:42.018994 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67b69481-e440-4d81-a985-460e6f585375-log-httpd\") pod \"ceilometer-0\" (UID: \"67b69481-e440-4d81-a985-460e6f585375\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:42 crc kubenswrapper[4796]: I1202 20:40:42.024311 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/67b69481-e440-4d81-a985-460e6f585375-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"67b69481-e440-4d81-a985-460e6f585375\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:42 crc kubenswrapper[4796]: I1202 20:40:42.032391 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67b69481-e440-4d81-a985-460e6f585375-config-data\") pod \"ceilometer-0\" (UID: \"67b69481-e440-4d81-a985-460e6f585375\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:42 crc kubenswrapper[4796]: I1202 20:40:42.032540 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67b69481-e440-4d81-a985-460e6f585375-scripts\") pod \"ceilometer-0\" (UID: \"67b69481-e440-4d81-a985-460e6f585375\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:42 crc kubenswrapper[4796]: I1202 20:40:42.034693 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67b69481-e440-4d81-a985-460e6f585375-run-httpd\") pod \"ceilometer-0\" (UID: \"67b69481-e440-4d81-a985-460e6f585375\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:42 crc kubenswrapper[4796]: I1202 20:40:42.035143 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67b69481-e440-4d81-a985-460e6f585375-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"67b69481-e440-4d81-a985-460e6f585375\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:42 crc kubenswrapper[4796]: I1202 20:40:42.035545 4796 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/67b69481-e440-4d81-a985-460e6f585375-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"67b69481-e440-4d81-a985-460e6f585375\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:42 crc kubenswrapper[4796]: I1202 20:40:42.048950 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5hf6\" (UniqueName: \"kubernetes.io/projected/67b69481-e440-4d81-a985-460e6f585375-kube-api-access-h5hf6\") pod \"ceilometer-0\" (UID: \"67b69481-e440-4d81-a985-460e6f585375\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:42 crc kubenswrapper[4796]: I1202 20:40:42.225575 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:42 crc kubenswrapper[4796]: I1202 20:40:42.326627 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-jvhfg"] Dec 02 20:40:42 crc kubenswrapper[4796]: I1202 20:40:42.327612 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-jvhfg" Dec 02 20:40:42 crc kubenswrapper[4796]: I1202 20:40:42.342464 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-6e73-account-create-update-ln8bg"] Dec 02 20:40:42 crc kubenswrapper[4796]: I1202 20:40:42.391382 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-jvhfg"] Dec 02 20:40:42 crc kubenswrapper[4796]: I1202 20:40:42.391432 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-6e73-account-create-update-ln8bg"] Dec 02 20:40:42 crc kubenswrapper[4796]: I1202 20:40:42.391530 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-6e73-account-create-update-ln8bg" Dec 02 20:40:42 crc kubenswrapper[4796]: I1202 20:40:42.393577 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Dec 02 20:40:42 crc kubenswrapper[4796]: I1202 20:40:42.428427 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94199217-9cde-421d-a1b0-643f0954aa47-operator-scripts\") pod \"watcher-db-create-jvhfg\" (UID: \"94199217-9cde-421d-a1b0-643f0954aa47\") " pod="watcher-kuttl-default/watcher-db-create-jvhfg" Dec 02 20:40:42 crc kubenswrapper[4796]: I1202 20:40:42.428650 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx9qm\" (UniqueName: \"kubernetes.io/projected/75b946a7-08e8-4eca-91a0-cd9ad1234ff4-kube-api-access-kx9qm\") pod \"watcher-6e73-account-create-update-ln8bg\" (UID: \"75b946a7-08e8-4eca-91a0-cd9ad1234ff4\") " pod="watcher-kuttl-default/watcher-6e73-account-create-update-ln8bg" Dec 02 20:40:42 crc kubenswrapper[4796]: I1202 20:40:42.428739 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75b946a7-08e8-4eca-91a0-cd9ad1234ff4-operator-scripts\") pod \"watcher-6e73-account-create-update-ln8bg\" (UID: \"75b946a7-08e8-4eca-91a0-cd9ad1234ff4\") " pod="watcher-kuttl-default/watcher-6e73-account-create-update-ln8bg" Dec 02 20:40:42 crc kubenswrapper[4796]: I1202 20:40:42.428779 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nts5w\" (UniqueName: \"kubernetes.io/projected/94199217-9cde-421d-a1b0-643f0954aa47-kube-api-access-nts5w\") pod \"watcher-db-create-jvhfg\" (UID: \"94199217-9cde-421d-a1b0-643f0954aa47\") " pod="watcher-kuttl-default/watcher-db-create-jvhfg" Dec 02 20:40:42 crc kubenswrapper[4796]: I1202 20:40:42.530371 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx9qm\" (UniqueName: \"kubernetes.io/projected/75b946a7-08e8-4eca-91a0-cd9ad1234ff4-kube-api-access-kx9qm\") pod \"watcher-6e73-account-create-update-ln8bg\" (UID: \"75b946a7-08e8-4eca-91a0-cd9ad1234ff4\") " pod="watcher-kuttl-default/watcher-6e73-account-create-update-ln8bg" Dec 02 20:40:42 crc kubenswrapper[4796]: I1202 20:40:42.530700 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75b946a7-08e8-4eca-91a0-cd9ad1234ff4-operator-scripts\") pod \"watcher-6e73-account-create-update-ln8bg\" (UID: \"75b946a7-08e8-4eca-91a0-cd9ad1234ff4\") " pod="watcher-kuttl-default/watcher-6e73-account-create-update-ln8bg" Dec 02 20:40:42 crc kubenswrapper[4796]: I1202 20:40:42.530747 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nts5w\" (UniqueName: \"kubernetes.io/projected/94199217-9cde-421d-a1b0-643f0954aa47-kube-api-access-nts5w\") pod \"watcher-db-create-jvhfg\" (UID: \"94199217-9cde-421d-a1b0-643f0954aa47\") " pod="watcher-kuttl-default/watcher-db-create-jvhfg" Dec 02 20:40:42 crc kubenswrapper[4796]: I1202 20:40:42.530869 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94199217-9cde-421d-a1b0-643f0954aa47-operator-scripts\") pod 
\"watcher-db-create-jvhfg\" (UID: \"94199217-9cde-421d-a1b0-643f0954aa47\") " pod="watcher-kuttl-default/watcher-db-create-jvhfg" Dec 02 20:40:42 crc kubenswrapper[4796]: I1202 20:40:42.531737 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94199217-9cde-421d-a1b0-643f0954aa47-operator-scripts\") pod \"watcher-db-create-jvhfg\" (UID: \"94199217-9cde-421d-a1b0-643f0954aa47\") " pod="watcher-kuttl-default/watcher-db-create-jvhfg" Dec 02 20:40:42 crc kubenswrapper[4796]: I1202 20:40:42.532322 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75b946a7-08e8-4eca-91a0-cd9ad1234ff4-operator-scripts\") pod \"watcher-6e73-account-create-update-ln8bg\" (UID: \"75b946a7-08e8-4eca-91a0-cd9ad1234ff4\") " pod="watcher-kuttl-default/watcher-6e73-account-create-update-ln8bg" Dec 02 20:40:42 crc kubenswrapper[4796]: I1202 20:40:42.550003 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nts5w\" (UniqueName: \"kubernetes.io/projected/94199217-9cde-421d-a1b0-643f0954aa47-kube-api-access-nts5w\") pod \"watcher-db-create-jvhfg\" (UID: \"94199217-9cde-421d-a1b0-643f0954aa47\") " pod="watcher-kuttl-default/watcher-db-create-jvhfg" Dec 02 20:40:42 crc kubenswrapper[4796]: I1202 20:40:42.550846 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx9qm\" (UniqueName: \"kubernetes.io/projected/75b946a7-08e8-4eca-91a0-cd9ad1234ff4-kube-api-access-kx9qm\") pod \"watcher-6e73-account-create-update-ln8bg\" (UID: \"75b946a7-08e8-4eca-91a0-cd9ad1234ff4\") " pod="watcher-kuttl-default/watcher-6e73-account-create-update-ln8bg" Dec 02 20:40:42 crc kubenswrapper[4796]: I1202 20:40:42.710815 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-jvhfg" Dec 02 20:40:42 crc kubenswrapper[4796]: I1202 20:40:42.740672 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-6e73-account-create-update-ln8bg" Dec 02 20:40:42 crc kubenswrapper[4796]: I1202 20:40:42.786021 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:40:42 crc kubenswrapper[4796]: I1202 20:40:42.868240 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"67b69481-e440-4d81-a985-460e6f585375","Type":"ContainerStarted","Data":"ea002b185fa6a6d7fdd805ddbde3edd2d84159996707f31fce5731e978a43ac3"} Dec 02 20:40:43 crc kubenswrapper[4796]: I1202 20:40:43.235066 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-jvhfg"] Dec 02 20:40:43 crc kubenswrapper[4796]: I1202 20:40:43.275519 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd43218a-5ff0-49f3-bfbf-d2943c1214c4" path="/var/lib/kubelet/pods/dd43218a-5ff0-49f3-bfbf-d2943c1214c4/volumes" Dec 02 20:40:43 crc kubenswrapper[4796]: I1202 20:40:43.319109 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-6e73-account-create-update-ln8bg"] Dec 02 20:40:43 crc kubenswrapper[4796]: W1202 20:40:43.330990 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75b946a7_08e8_4eca_91a0_cd9ad1234ff4.slice/crio-0f4878f12e81fec5c9a3916f20b08f8b9f7f02eb2946c80a9e5c5f789d44418f WatchSource:0}: Error finding container 0f4878f12e81fec5c9a3916f20b08f8b9f7f02eb2946c80a9e5c5f789d44418f: Status 404 returned error can't find the container with id 0f4878f12e81fec5c9a3916f20b08f8b9f7f02eb2946c80a9e5c5f789d44418f Dec 02 20:40:43 crc kubenswrapper[4796]: I1202 20:40:43.882953 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"67b69481-e440-4d81-a985-460e6f585375","Type":"ContainerStarted","Data":"c7aaff964d6e783a9bddd5c85c3fc50bcc6eef9b67c565a0d130e8d9591add73"} Dec 02 20:40:43 crc kubenswrapper[4796]: I1202 20:40:43.884574 4796 generic.go:334] "Generic (PLEG): container finished" podID="75b946a7-08e8-4eca-91a0-cd9ad1234ff4" containerID="5507c92e0ac9a759849bf26d9bf78c6635cb3662fa32177933fa5448873a326f" exitCode=0 Dec 02 20:40:43 crc kubenswrapper[4796]: I1202 20:40:43.884623 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-6e73-account-create-update-ln8bg" event={"ID":"75b946a7-08e8-4eca-91a0-cd9ad1234ff4","Type":"ContainerDied","Data":"5507c92e0ac9a759849bf26d9bf78c6635cb3662fa32177933fa5448873a326f"} Dec 02 20:40:43 crc kubenswrapper[4796]: I1202 20:40:43.884657 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-6e73-account-create-update-ln8bg" event={"ID":"75b946a7-08e8-4eca-91a0-cd9ad1234ff4","Type":"ContainerStarted","Data":"0f4878f12e81fec5c9a3916f20b08f8b9f7f02eb2946c80a9e5c5f789d44418f"} Dec 02 20:40:43 crc kubenswrapper[4796]: I1202 20:40:43.886538 4796 generic.go:334] "Generic (PLEG): container finished" podID="94199217-9cde-421d-a1b0-643f0954aa47" containerID="68c160e52360bdf8d864992702819761d715af592fe7dc3fcc9537e4410bd2fc" exitCode=0 Dec 02 20:40:43 crc kubenswrapper[4796]: I1202 20:40:43.886573 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-jvhfg" event={"ID":"94199217-9cde-421d-a1b0-643f0954aa47","Type":"ContainerDied","Data":"68c160e52360bdf8d864992702819761d715af592fe7dc3fcc9537e4410bd2fc"} Dec 02 20:40:43 crc 
kubenswrapper[4796]: I1202 20:40:43.886612 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-jvhfg" event={"ID":"94199217-9cde-421d-a1b0-643f0954aa47","Type":"ContainerStarted","Data":"cf393ac6ad84b7095364a726d42e433f065ddacc4856728f46b1d11778514a71"} Dec 02 20:40:44 crc kubenswrapper[4796]: I1202 20:40:44.897740 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"67b69481-e440-4d81-a985-460e6f585375","Type":"ContainerStarted","Data":"b0669dae1269244936eec6d63b16c4bd0fa351595e95a3d0a86ab99962a7308c"} Dec 02 20:40:45 crc kubenswrapper[4796]: I1202 20:40:45.430102 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-jvhfg" Dec 02 20:40:45 crc kubenswrapper[4796]: I1202 20:40:45.435586 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-6e73-account-create-update-ln8bg" Dec 02 20:40:45 crc kubenswrapper[4796]: I1202 20:40:45.500084 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94199217-9cde-421d-a1b0-643f0954aa47-operator-scripts\") pod \"94199217-9cde-421d-a1b0-643f0954aa47\" (UID: \"94199217-9cde-421d-a1b0-643f0954aa47\") " Dec 02 20:40:45 crc kubenswrapper[4796]: I1202 20:40:45.500191 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nts5w\" (UniqueName: \"kubernetes.io/projected/94199217-9cde-421d-a1b0-643f0954aa47-kube-api-access-nts5w\") pod \"94199217-9cde-421d-a1b0-643f0954aa47\" (UID: \"94199217-9cde-421d-a1b0-643f0954aa47\") " Dec 02 20:40:45 crc kubenswrapper[4796]: I1202 20:40:45.500281 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kx9qm\" (UniqueName: \"kubernetes.io/projected/75b946a7-08e8-4eca-91a0-cd9ad1234ff4-kube-api-access-kx9qm\") pod \"75b946a7-08e8-4eca-91a0-cd9ad1234ff4\" (UID: \"75b946a7-08e8-4eca-91a0-cd9ad1234ff4\") " Dec 02 20:40:45 crc kubenswrapper[4796]: I1202 20:40:45.500357 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75b946a7-08e8-4eca-91a0-cd9ad1234ff4-operator-scripts\") pod \"75b946a7-08e8-4eca-91a0-cd9ad1234ff4\" (UID: \"75b946a7-08e8-4eca-91a0-cd9ad1234ff4\") " Dec 02 20:40:45 crc kubenswrapper[4796]: I1202 20:40:45.500893 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94199217-9cde-421d-a1b0-643f0954aa47-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "94199217-9cde-421d-a1b0-643f0954aa47" (UID: "94199217-9cde-421d-a1b0-643f0954aa47"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:40:45 crc kubenswrapper[4796]: I1202 20:40:45.500906 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75b946a7-08e8-4eca-91a0-cd9ad1234ff4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "75b946a7-08e8-4eca-91a0-cd9ad1234ff4" (UID: "75b946a7-08e8-4eca-91a0-cd9ad1234ff4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:40:45 crc kubenswrapper[4796]: I1202 20:40:45.501185 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75b946a7-08e8-4eca-91a0-cd9ad1234ff4-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:45 crc kubenswrapper[4796]: I1202 20:40:45.501203 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94199217-9cde-421d-a1b0-643f0954aa47-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:45 crc kubenswrapper[4796]: I1202 20:40:45.505771 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94199217-9cde-421d-a1b0-643f0954aa47-kube-api-access-nts5w" (OuterVolumeSpecName: "kube-api-access-nts5w") pod "94199217-9cde-421d-a1b0-643f0954aa47" (UID: "94199217-9cde-421d-a1b0-643f0954aa47"). InnerVolumeSpecName "kube-api-access-nts5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:40:45 crc kubenswrapper[4796]: I1202 20:40:45.508679 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75b946a7-08e8-4eca-91a0-cd9ad1234ff4-kube-api-access-kx9qm" (OuterVolumeSpecName: "kube-api-access-kx9qm") pod "75b946a7-08e8-4eca-91a0-cd9ad1234ff4" (UID: "75b946a7-08e8-4eca-91a0-cd9ad1234ff4"). InnerVolumeSpecName "kube-api-access-kx9qm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:40:45 crc kubenswrapper[4796]: I1202 20:40:45.602100 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nts5w\" (UniqueName: \"kubernetes.io/projected/94199217-9cde-421d-a1b0-643f0954aa47-kube-api-access-nts5w\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:45 crc kubenswrapper[4796]: I1202 20:40:45.602149 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kx9qm\" (UniqueName: \"kubernetes.io/projected/75b946a7-08e8-4eca-91a0-cd9ad1234ff4-kube-api-access-kx9qm\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:45 crc kubenswrapper[4796]: I1202 20:40:45.909867 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-jvhfg" Dec 02 20:40:45 crc kubenswrapper[4796]: I1202 20:40:45.909864 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-jvhfg" event={"ID":"94199217-9cde-421d-a1b0-643f0954aa47","Type":"ContainerDied","Data":"cf393ac6ad84b7095364a726d42e433f065ddacc4856728f46b1d11778514a71"} Dec 02 20:40:45 crc kubenswrapper[4796]: I1202 20:40:45.910296 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf393ac6ad84b7095364a726d42e433f065ddacc4856728f46b1d11778514a71" Dec 02 20:40:45 crc kubenswrapper[4796]: I1202 20:40:45.912640 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"67b69481-e440-4d81-a985-460e6f585375","Type":"ContainerStarted","Data":"30a5498219c6977650649380c99539c33920aeab4f235214128df9684f31b166"} Dec 02 20:40:45 crc kubenswrapper[4796]: I1202 20:40:45.914407 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-6e73-account-create-update-ln8bg" event={"ID":"75b946a7-08e8-4eca-91a0-cd9ad1234ff4","Type":"ContainerDied","Data":"0f4878f12e81fec5c9a3916f20b08f8b9f7f02eb2946c80a9e5c5f789d44418f"} Dec 02 20:40:45 crc kubenswrapper[4796]: I1202 20:40:45.914439 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f4878f12e81fec5c9a3916f20b08f8b9f7f02eb2946c80a9e5c5f789d44418f" Dec 02 20:40:45 crc kubenswrapper[4796]: I1202 20:40:45.914478 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-6e73-account-create-update-ln8bg" Dec 02 20:40:46 crc kubenswrapper[4796]: I1202 20:40:46.924400 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"67b69481-e440-4d81-a985-460e6f585375","Type":"ContainerStarted","Data":"fcc25f05be5265f22444731219537c1a4a18e97d847d305b76beb6d8ba32b87b"} Dec 02 20:40:46 crc kubenswrapper[4796]: I1202 20:40:46.925956 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:40:46 crc kubenswrapper[4796]: I1202 20:40:46.950907 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.784971129 podStartE2EDuration="5.950882521s" podCreationTimestamp="2025-12-02 20:40:41 +0000 UTC" firstStartedPulling="2025-12-02 20:40:42.803898583 +0000 UTC m=+1725.807274117" lastFinishedPulling="2025-12-02 20:40:45.969809975 +0000 UTC m=+1728.973185509" observedRunningTime="2025-12-02 20:40:46.942239694 +0000 UTC m=+1729.945615228" watchObservedRunningTime="2025-12-02 20:40:46.950882521 +0000 UTC m=+1729.954258065" Dec 02 20:40:47 crc kubenswrapper[4796]: I1202 20:40:47.675396 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-kgw9h"] Dec 02 20:40:47 crc kubenswrapper[4796]: E1202 20:40:47.675798 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94199217-9cde-421d-a1b0-643f0954aa47" containerName="mariadb-database-create" Dec 02 20:40:47 crc kubenswrapper[4796]: I1202 20:40:47.675820 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="94199217-9cde-421d-a1b0-643f0954aa47" containerName="mariadb-database-create" Dec 02 20:40:47 crc kubenswrapper[4796]: E1202 20:40:47.675838 4796 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="75b946a7-08e8-4eca-91a0-cd9ad1234ff4" containerName="mariadb-account-create-update" Dec 02 20:40:47 crc kubenswrapper[4796]: I1202 20:40:47.675850 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="75b946a7-08e8-4eca-91a0-cd9ad1234ff4" containerName="mariadb-account-create-update" Dec 02 20:40:47 crc kubenswrapper[4796]: I1202 20:40:47.676072 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="75b946a7-08e8-4eca-91a0-cd9ad1234ff4" containerName="mariadb-account-create-update" Dec 02 20:40:47 crc kubenswrapper[4796]: I1202 20:40:47.676095 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="94199217-9cde-421d-a1b0-643f0954aa47" containerName="mariadb-database-create" Dec 02 20:40:47 crc kubenswrapper[4796]: I1202 20:40:47.676815 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-kgw9h" Dec 02 20:40:47 crc kubenswrapper[4796]: I1202 20:40:47.680449 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-nt22x" Dec 02 20:40:47 crc kubenswrapper[4796]: I1202 20:40:47.680587 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Dec 02 20:40:47 crc kubenswrapper[4796]: I1202 20:40:47.683972 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-kgw9h"] Dec 02 20:40:47 crc kubenswrapper[4796]: I1202 20:40:47.739830 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3692da8f-12ea-4a1d-8107-0f6272035f3d-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-kgw9h\" (UID: \"3692da8f-12ea-4a1d-8107-0f6272035f3d\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kgw9h" Dec 02 20:40:47 crc kubenswrapper[4796]: I1202 20:40:47.739896 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3692da8f-12ea-4a1d-8107-0f6272035f3d-config-data\") pod \"watcher-kuttl-db-sync-kgw9h\" (UID: \"3692da8f-12ea-4a1d-8107-0f6272035f3d\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kgw9h" Dec 02 20:40:47 crc kubenswrapper[4796]: I1202 20:40:47.739934 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7scz\" (UniqueName: \"kubernetes.io/projected/3692da8f-12ea-4a1d-8107-0f6272035f3d-kube-api-access-j7scz\") pod \"watcher-kuttl-db-sync-kgw9h\" (UID: \"3692da8f-12ea-4a1d-8107-0f6272035f3d\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kgw9h" Dec 02 20:40:47 crc kubenswrapper[4796]: I1202 20:40:47.740079 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3692da8f-12ea-4a1d-8107-0f6272035f3d-db-sync-config-data\") pod \"watcher-kuttl-db-sync-kgw9h\" (UID: \"3692da8f-12ea-4a1d-8107-0f6272035f3d\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kgw9h" Dec 02 20:40:47 crc kubenswrapper[4796]: I1202 20:40:47.842318 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3692da8f-12ea-4a1d-8107-0f6272035f3d-db-sync-config-data\") pod \"watcher-kuttl-db-sync-kgw9h\" (UID: \"3692da8f-12ea-4a1d-8107-0f6272035f3d\") " 
pod="watcher-kuttl-default/watcher-kuttl-db-sync-kgw9h" Dec 02 20:40:47 crc kubenswrapper[4796]: I1202 20:40:47.842452 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3692da8f-12ea-4a1d-8107-0f6272035f3d-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-kgw9h\" (UID: \"3692da8f-12ea-4a1d-8107-0f6272035f3d\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kgw9h" Dec 02 20:40:47 crc kubenswrapper[4796]: I1202 20:40:47.842497 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3692da8f-12ea-4a1d-8107-0f6272035f3d-config-data\") pod \"watcher-kuttl-db-sync-kgw9h\" (UID: \"3692da8f-12ea-4a1d-8107-0f6272035f3d\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kgw9h" Dec 02 20:40:47 crc kubenswrapper[4796]: I1202 20:40:47.842530 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7scz\" (UniqueName: \"kubernetes.io/projected/3692da8f-12ea-4a1d-8107-0f6272035f3d-kube-api-access-j7scz\") pod \"watcher-kuttl-db-sync-kgw9h\" (UID: \"3692da8f-12ea-4a1d-8107-0f6272035f3d\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kgw9h" Dec 02 20:40:47 crc kubenswrapper[4796]: I1202 20:40:47.849019 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3692da8f-12ea-4a1d-8107-0f6272035f3d-db-sync-config-data\") pod \"watcher-kuttl-db-sync-kgw9h\" (UID: \"3692da8f-12ea-4a1d-8107-0f6272035f3d\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kgw9h" Dec 02 20:40:47 crc kubenswrapper[4796]: I1202 20:40:47.849318 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3692da8f-12ea-4a1d-8107-0f6272035f3d-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-kgw9h\" (UID: \"3692da8f-12ea-4a1d-8107-0f6272035f3d\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kgw9h" Dec 02 20:40:47 crc kubenswrapper[4796]: I1202 20:40:47.861399 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3692da8f-12ea-4a1d-8107-0f6272035f3d-config-data\") pod \"watcher-kuttl-db-sync-kgw9h\" (UID: \"3692da8f-12ea-4a1d-8107-0f6272035f3d\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kgw9h" Dec 02 20:40:47 crc kubenswrapper[4796]: I1202 20:40:47.867129 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7scz\" (UniqueName: \"kubernetes.io/projected/3692da8f-12ea-4a1d-8107-0f6272035f3d-kube-api-access-j7scz\") pod \"watcher-kuttl-db-sync-kgw9h\" (UID: \"3692da8f-12ea-4a1d-8107-0f6272035f3d\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kgw9h" Dec 02 20:40:47 crc kubenswrapper[4796]: I1202 20:40:47.996223 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-kgw9h" Dec 02 20:40:48 crc kubenswrapper[4796]: I1202 20:40:48.266773 4796 scope.go:117] "RemoveContainer" containerID="a3b1e4c1e71e5db50d52130283a4f488c4d2f8d9bfa463ec7357e0975e3daac1" Dec 02 20:40:48 crc kubenswrapper[4796]: E1202 20:40:48.267499 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wzhpq_openshift-machine-config-operator(5558dc7c-93f9-4212-bf22-fdec743e47ee)\"" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" Dec 02 20:40:48 crc kubenswrapper[4796]: I1202 20:40:48.541052 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-kgw9h"] Dec 02 20:40:48 crc kubenswrapper[4796]: I1202 20:40:48.945019 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-kgw9h" event={"ID":"3692da8f-12ea-4a1d-8107-0f6272035f3d","Type":"ContainerStarted","Data":"a7ef55e420b57fc9c2d8e9534b7e6f99c9732ebe3f31478e5c34ca6618669192"} Dec 02 20:40:48 crc kubenswrapper[4796]: I1202 20:40:48.945102 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-kgw9h" event={"ID":"3692da8f-12ea-4a1d-8107-0f6272035f3d","Type":"ContainerStarted","Data":"e849e76d90f4bfe6c21cb299a22706377ec1dd218e556b9a6403e071a99ff7c4"} Dec 02 20:40:48 crc kubenswrapper[4796]: I1202 20:40:48.969093 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-kgw9h" podStartSLOduration=1.96907459 podStartE2EDuration="1.96907459s" podCreationTimestamp="2025-12-02 20:40:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:40:48.961876497 +0000 UTC m=+1731.965252051" watchObservedRunningTime="2025-12-02 20:40:48.96907459 +0000 UTC m=+1731.972450144" Dec 02 20:40:51 crc kubenswrapper[4796]: I1202 20:40:51.973580 4796 generic.go:334] "Generic (PLEG): container finished" podID="3692da8f-12ea-4a1d-8107-0f6272035f3d" containerID="a7ef55e420b57fc9c2d8e9534b7e6f99c9732ebe3f31478e5c34ca6618669192" exitCode=0 Dec 02 20:40:51 crc kubenswrapper[4796]: I1202 20:40:51.973700 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-kgw9h" event={"ID":"3692da8f-12ea-4a1d-8107-0f6272035f3d","Type":"ContainerDied","Data":"a7ef55e420b57fc9c2d8e9534b7e6f99c9732ebe3f31478e5c34ca6618669192"} Dec 02 20:40:53 crc kubenswrapper[4796]: I1202 20:40:53.383754 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-kgw9h" Dec 02 20:40:53 crc kubenswrapper[4796]: I1202 20:40:53.535034 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3692da8f-12ea-4a1d-8107-0f6272035f3d-db-sync-config-data\") pod \"3692da8f-12ea-4a1d-8107-0f6272035f3d\" (UID: \"3692da8f-12ea-4a1d-8107-0f6272035f3d\") " Dec 02 20:40:53 crc kubenswrapper[4796]: I1202 20:40:53.535189 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3692da8f-12ea-4a1d-8107-0f6272035f3d-config-data\") pod \"3692da8f-12ea-4a1d-8107-0f6272035f3d\" (UID: \"3692da8f-12ea-4a1d-8107-0f6272035f3d\") " Dec 02 20:40:53 crc kubenswrapper[4796]: I1202 20:40:53.535361 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7scz\" (UniqueName: \"kubernetes.io/projected/3692da8f-12ea-4a1d-8107-0f6272035f3d-kube-api-access-j7scz\") pod \"3692da8f-12ea-4a1d-8107-0f6272035f3d\" (UID: \"3692da8f-12ea-4a1d-8107-0f6272035f3d\") " Dec 02 20:40:53 crc kubenswrapper[4796]: I1202 20:40:53.535391 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3692da8f-12ea-4a1d-8107-0f6272035f3d-combined-ca-bundle\") pod \"3692da8f-12ea-4a1d-8107-0f6272035f3d\" (UID: \"3692da8f-12ea-4a1d-8107-0f6272035f3d\") " Dec 02 20:40:53 crc kubenswrapper[4796]: I1202 20:40:53.555353 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3692da8f-12ea-4a1d-8107-0f6272035f3d-kube-api-access-j7scz" (OuterVolumeSpecName: "kube-api-access-j7scz") pod "3692da8f-12ea-4a1d-8107-0f6272035f3d" (UID: "3692da8f-12ea-4a1d-8107-0f6272035f3d"). InnerVolumeSpecName "kube-api-access-j7scz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:40:53 crc kubenswrapper[4796]: I1202 20:40:53.555563 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3692da8f-12ea-4a1d-8107-0f6272035f3d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3692da8f-12ea-4a1d-8107-0f6272035f3d" (UID: "3692da8f-12ea-4a1d-8107-0f6272035f3d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:53 crc kubenswrapper[4796]: I1202 20:40:53.571413 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3692da8f-12ea-4a1d-8107-0f6272035f3d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3692da8f-12ea-4a1d-8107-0f6272035f3d" (UID: "3692da8f-12ea-4a1d-8107-0f6272035f3d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:53 crc kubenswrapper[4796]: I1202 20:40:53.590678 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3692da8f-12ea-4a1d-8107-0f6272035f3d-config-data" (OuterVolumeSpecName: "config-data") pod "3692da8f-12ea-4a1d-8107-0f6272035f3d" (UID: "3692da8f-12ea-4a1d-8107-0f6272035f3d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:40:53 crc kubenswrapper[4796]: I1202 20:40:53.636853 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7scz\" (UniqueName: \"kubernetes.io/projected/3692da8f-12ea-4a1d-8107-0f6272035f3d-kube-api-access-j7scz\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:53 crc kubenswrapper[4796]: I1202 20:40:53.636901 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3692da8f-12ea-4a1d-8107-0f6272035f3d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:53 crc kubenswrapper[4796]: I1202 20:40:53.636922 4796 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3692da8f-12ea-4a1d-8107-0f6272035f3d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:53 crc kubenswrapper[4796]: I1202 20:40:53.636940 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3692da8f-12ea-4a1d-8107-0f6272035f3d-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:40:53 crc kubenswrapper[4796]: I1202 20:40:53.994190 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-kgw9h" event={"ID":"3692da8f-12ea-4a1d-8107-0f6272035f3d","Type":"ContainerDied","Data":"e849e76d90f4bfe6c21cb299a22706377ec1dd218e556b9a6403e071a99ff7c4"} Dec 02 20:40:53 crc kubenswrapper[4796]: I1202 20:40:53.994240 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e849e76d90f4bfe6c21cb299a22706377ec1dd218e556b9a6403e071a99ff7c4" Dec 02 20:40:53 crc kubenswrapper[4796]: I1202 20:40:53.994344 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-kgw9h" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.157843 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:40:54 crc kubenswrapper[4796]: E1202 20:40:54.158280 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3692da8f-12ea-4a1d-8107-0f6272035f3d" containerName="watcher-kuttl-db-sync" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.158302 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="3692da8f-12ea-4a1d-8107-0f6272035f3d" containerName="watcher-kuttl-db-sync" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.158496 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="3692da8f-12ea-4a1d-8107-0f6272035f3d" containerName="watcher-kuttl-db-sync" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.159436 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.161717 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-nt22x" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.161884 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.178824 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.298145 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.299570 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.308094 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.308228 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.314863 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.316006 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.318183 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.328190 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.349325 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e614179-fc0a-45eb-9004-1ab496f7043e-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"9e614179-fc0a-45eb-9004-1ab496f7043e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.349381 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9e614179-fc0a-45eb-9004-1ab496f7043e-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"9e614179-fc0a-45eb-9004-1ab496f7043e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.349444 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e614179-fc0a-45eb-9004-1ab496f7043e-logs\") pod \"watcher-kuttl-api-0\" (UID: \"9e614179-fc0a-45eb-9004-1ab496f7043e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.349488 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4vt6\" (UniqueName: 
\"kubernetes.io/projected/9e614179-fc0a-45eb-9004-1ab496f7043e-kube-api-access-n4vt6\") pod \"watcher-kuttl-api-0\" (UID: \"9e614179-fc0a-45eb-9004-1ab496f7043e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.349524 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e614179-fc0a-45eb-9004-1ab496f7043e-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"9e614179-fc0a-45eb-9004-1ab496f7043e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.349548 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/9e614179-fc0a-45eb-9004-1ab496f7043e-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"9e614179-fc0a-45eb-9004-1ab496f7043e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.451208 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c85da558-69ae-4fd0-95a3-e1f7b55f6392-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c85da558-69ae-4fd0-95a3-e1f7b55f6392\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.451319 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4vt6\" (UniqueName: \"kubernetes.io/projected/9e614179-fc0a-45eb-9004-1ab496f7043e-kube-api-access-n4vt6\") pod \"watcher-kuttl-api-0\" (UID: \"9e614179-fc0a-45eb-9004-1ab496f7043e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.451375 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/9c28fe42-3924-426f-81a8-9faed8a0f82f-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"9c28fe42-3924-426f-81a8-9faed8a0f82f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.451419 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c28fe42-3924-426f-81a8-9faed8a0f82f-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"9c28fe42-3924-426f-81a8-9faed8a0f82f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.451460 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c28fe42-3924-426f-81a8-9faed8a0f82f-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"9c28fe42-3924-426f-81a8-9faed8a0f82f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.451539 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e614179-fc0a-45eb-9004-1ab496f7043e-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"9e614179-fc0a-45eb-9004-1ab496f7043e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.451591 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/9e614179-fc0a-45eb-9004-1ab496f7043e-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"9e614179-fc0a-45eb-9004-1ab496f7043e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.451650 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c85da558-69ae-4fd0-95a3-e1f7b55f6392-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c85da558-69ae-4fd0-95a3-e1f7b55f6392\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.451705 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spfqm\" (UniqueName: \"kubernetes.io/projected/c85da558-69ae-4fd0-95a3-e1f7b55f6392-kube-api-access-spfqm\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c85da558-69ae-4fd0-95a3-e1f7b55f6392\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.451736 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c85da558-69ae-4fd0-95a3-e1f7b55f6392-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c85da558-69ae-4fd0-95a3-e1f7b55f6392\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.451786 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e614179-fc0a-45eb-9004-1ab496f7043e-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"9e614179-fc0a-45eb-9004-1ab496f7043e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.451841 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9e614179-fc0a-45eb-9004-1ab496f7043e-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"9e614179-fc0a-45eb-9004-1ab496f7043e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.451889 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c85da558-69ae-4fd0-95a3-e1f7b55f6392-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c85da558-69ae-4fd0-95a3-e1f7b55f6392\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.451938 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c85da558-69ae-4fd0-95a3-e1f7b55f6392-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c85da558-69ae-4fd0-95a3-e1f7b55f6392\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.451977 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqdph\" (UniqueName: \"kubernetes.io/projected/9c28fe42-3924-426f-81a8-9faed8a0f82f-kube-api-access-hqdph\") pod \"watcher-kuttl-applier-0\" (UID: 
\"9c28fe42-3924-426f-81a8-9faed8a0f82f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.452012 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c28fe42-3924-426f-81a8-9faed8a0f82f-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"9c28fe42-3924-426f-81a8-9faed8a0f82f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.452061 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e614179-fc0a-45eb-9004-1ab496f7043e-logs\") pod \"watcher-kuttl-api-0\" (UID: \"9e614179-fc0a-45eb-9004-1ab496f7043e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.457094 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e614179-fc0a-45eb-9004-1ab496f7043e-logs\") pod \"watcher-kuttl-api-0\" (UID: \"9e614179-fc0a-45eb-9004-1ab496f7043e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.457175 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e614179-fc0a-45eb-9004-1ab496f7043e-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"9e614179-fc0a-45eb-9004-1ab496f7043e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.457636 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/9e614179-fc0a-45eb-9004-1ab496f7043e-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"9e614179-fc0a-45eb-9004-1ab496f7043e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.457756 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e614179-fc0a-45eb-9004-1ab496f7043e-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"9e614179-fc0a-45eb-9004-1ab496f7043e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.461113 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9e614179-fc0a-45eb-9004-1ab496f7043e-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"9e614179-fc0a-45eb-9004-1ab496f7043e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.474532 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4vt6\" (UniqueName: \"kubernetes.io/projected/9e614179-fc0a-45eb-9004-1ab496f7043e-kube-api-access-n4vt6\") pod \"watcher-kuttl-api-0\" (UID: \"9e614179-fc0a-45eb-9004-1ab496f7043e\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.504672 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.554565 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c85da558-69ae-4fd0-95a3-e1f7b55f6392-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c85da558-69ae-4fd0-95a3-e1f7b55f6392\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.555174 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/9c28fe42-3924-426f-81a8-9faed8a0f82f-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"9c28fe42-3924-426f-81a8-9faed8a0f82f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.555218 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c28fe42-3924-426f-81a8-9faed8a0f82f-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"9c28fe42-3924-426f-81a8-9faed8a0f82f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.555286 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c28fe42-3924-426f-81a8-9faed8a0f82f-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"9c28fe42-3924-426f-81a8-9faed8a0f82f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.555366 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c85da558-69ae-4fd0-95a3-e1f7b55f6392-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c85da558-69ae-4fd0-95a3-e1f7b55f6392\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.555402 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spfqm\" (UniqueName: \"kubernetes.io/projected/c85da558-69ae-4fd0-95a3-e1f7b55f6392-kube-api-access-spfqm\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c85da558-69ae-4fd0-95a3-e1f7b55f6392\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.555431 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c85da558-69ae-4fd0-95a3-e1f7b55f6392-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c85da558-69ae-4fd0-95a3-e1f7b55f6392\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.555506 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c85da558-69ae-4fd0-95a3-e1f7b55f6392-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c85da558-69ae-4fd0-95a3-e1f7b55f6392\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.555556 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c85da558-69ae-4fd0-95a3-e1f7b55f6392-combined-ca-bundle\") pod 
\"watcher-kuttl-decision-engine-0\" (UID: \"c85da558-69ae-4fd0-95a3-e1f7b55f6392\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.555612 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqdph\" (UniqueName: \"kubernetes.io/projected/9c28fe42-3924-426f-81a8-9faed8a0f82f-kube-api-access-hqdph\") pod \"watcher-kuttl-applier-0\" (UID: \"9c28fe42-3924-426f-81a8-9faed8a0f82f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.555651 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c28fe42-3924-426f-81a8-9faed8a0f82f-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"9c28fe42-3924-426f-81a8-9faed8a0f82f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.556161 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c85da558-69ae-4fd0-95a3-e1f7b55f6392-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c85da558-69ae-4fd0-95a3-e1f7b55f6392\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.558613 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c85da558-69ae-4fd0-95a3-e1f7b55f6392-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c85da558-69ae-4fd0-95a3-e1f7b55f6392\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.558998 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c85da558-69ae-4fd0-95a3-e1f7b55f6392-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c85da558-69ae-4fd0-95a3-e1f7b55f6392\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.559422 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c28fe42-3924-426f-81a8-9faed8a0f82f-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"9c28fe42-3924-426f-81a8-9faed8a0f82f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.560409 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/9c28fe42-3924-426f-81a8-9faed8a0f82f-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"9c28fe42-3924-426f-81a8-9faed8a0f82f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.561021 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c85da558-69ae-4fd0-95a3-e1f7b55f6392-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c85da558-69ae-4fd0-95a3-e1f7b55f6392\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.562055 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9c28fe42-3924-426f-81a8-9faed8a0f82f-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"9c28fe42-3924-426f-81a8-9faed8a0f82f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.559634 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c28fe42-3924-426f-81a8-9faed8a0f82f-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"9c28fe42-3924-426f-81a8-9faed8a0f82f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.575531 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c85da558-69ae-4fd0-95a3-e1f7b55f6392-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c85da558-69ae-4fd0-95a3-e1f7b55f6392\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.577847 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spfqm\" (UniqueName: \"kubernetes.io/projected/c85da558-69ae-4fd0-95a3-e1f7b55f6392-kube-api-access-spfqm\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c85da558-69ae-4fd0-95a3-e1f7b55f6392\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.580539 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqdph\" (UniqueName: \"kubernetes.io/projected/9c28fe42-3924-426f-81a8-9faed8a0f82f-kube-api-access-hqdph\") pod \"watcher-kuttl-applier-0\" (UID: \"9c28fe42-3924-426f-81a8-9faed8a0f82f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.618410 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.644299 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:40:54 crc kubenswrapper[4796]: I1202 20:40:54.809016 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:40:55 crc kubenswrapper[4796]: I1202 20:40:55.008545 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"9e614179-fc0a-45eb-9004-1ab496f7043e","Type":"ContainerStarted","Data":"00dff3823cadfcfc0ded0159333a27a47a14017355f152922c074d4735aa274d"} Dec 02 20:40:55 crc kubenswrapper[4796]: I1202 20:40:55.132658 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:40:55 crc kubenswrapper[4796]: W1202 20:40:55.138418 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c28fe42_3924_426f_81a8_9faed8a0f82f.slice/crio-40da32c3aa72b6bf67d869b303bc888b545c66661dbc1538c32176fe1ca7e117 WatchSource:0}: Error finding container 40da32c3aa72b6bf67d869b303bc888b545c66661dbc1538c32176fe1ca7e117: Status 404 returned error can't find the container with id 40da32c3aa72b6bf67d869b303bc888b545c66661dbc1538c32176fe1ca7e117 Dec 02 20:40:55 crc kubenswrapper[4796]: I1202 20:40:55.228044 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:40:55 crc kubenswrapper[4796]: W1202 20:40:55.233106 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc85da558_69ae_4fd0_95a3_e1f7b55f6392.slice/crio-41d34db2bb24d3da8a66e52edca71ab2199fe26a2a20e428a399e9b695484118 WatchSource:0}: Error finding container 41d34db2bb24d3da8a66e52edca71ab2199fe26a2a20e428a399e9b695484118: Status 404 returned error can't find the container with id 41d34db2bb24d3da8a66e52edca71ab2199fe26a2a20e428a399e9b695484118 Dec 02 20:40:56 crc kubenswrapper[4796]: I1202 20:40:56.030715 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"c85da558-69ae-4fd0-95a3-e1f7b55f6392","Type":"ContainerStarted","Data":"4f02237aa9c7036cf051ea905115cc6419b5d5a2a58f5e3e143fd279a4700903"} Dec 02 20:40:56 crc kubenswrapper[4796]: I1202 20:40:56.030972 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"c85da558-69ae-4fd0-95a3-e1f7b55f6392","Type":"ContainerStarted","Data":"41d34db2bb24d3da8a66e52edca71ab2199fe26a2a20e428a399e9b695484118"} Dec 02 20:40:56 crc kubenswrapper[4796]: I1202 20:40:56.037209 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"9c28fe42-3924-426f-81a8-9faed8a0f82f","Type":"ContainerStarted","Data":"5cb36a0be37c1e63f5e65e64da3c8d3504b9870e50b30cee239273d88ce584db"} Dec 02 20:40:56 crc kubenswrapper[4796]: I1202 20:40:56.037359 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"9c28fe42-3924-426f-81a8-9faed8a0f82f","Type":"ContainerStarted","Data":"40da32c3aa72b6bf67d869b303bc888b545c66661dbc1538c32176fe1ca7e117"} Dec 02 20:40:56 crc kubenswrapper[4796]: I1202 20:40:56.040862 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" 
event={"ID":"9e614179-fc0a-45eb-9004-1ab496f7043e","Type":"ContainerStarted","Data":"4203d61dc8feb54977f9f049844aff5a0d3de631d2335058a52a4c8faf31e4ad"} Dec 02 20:40:56 crc kubenswrapper[4796]: I1202 20:40:56.040897 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"9e614179-fc0a-45eb-9004-1ab496f7043e","Type":"ContainerStarted","Data":"dd5958e9a4c33789219a4e5ca1a205cf43a0951b7f369aee9f4591c04bf63a79"} Dec 02 20:40:56 crc kubenswrapper[4796]: I1202 20:40:56.041651 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:40:56 crc kubenswrapper[4796]: I1202 20:40:56.063519 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.063496628 podStartE2EDuration="2.063496628s" podCreationTimestamp="2025-12-02 20:40:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:40:56.053969219 +0000 UTC m=+1739.057344753" watchObservedRunningTime="2025-12-02 20:40:56.063496628 +0000 UTC m=+1739.066872162" Dec 02 20:40:56 crc kubenswrapper[4796]: I1202 20:40:56.084319 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.084301079 podStartE2EDuration="2.084301079s" podCreationTimestamp="2025-12-02 20:40:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:40:56.083597102 +0000 UTC m=+1739.086972636" watchObservedRunningTime="2025-12-02 20:40:56.084301079 +0000 UTC m=+1739.087676613" Dec 02 20:40:56 crc kubenswrapper[4796]: I1202 20:40:56.103537 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.103519221 podStartE2EDuration="2.103519221s" podCreationTimestamp="2025-12-02 20:40:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:40:56.102495356 +0000 UTC m=+1739.105870900" watchObservedRunningTime="2025-12-02 20:40:56.103519221 +0000 UTC m=+1739.106894755" Dec 02 20:40:58 crc kubenswrapper[4796]: I1202 20:40:58.056510 4796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 20:40:58 crc kubenswrapper[4796]: I1202 20:40:58.344959 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:40:59 crc kubenswrapper[4796]: I1202 20:40:59.265448 4796 scope.go:117] "RemoveContainer" containerID="a3b1e4c1e71e5db50d52130283a4f488c4d2f8d9bfa463ec7357e0975e3daac1" Dec 02 20:40:59 crc kubenswrapper[4796]: E1202 20:40:59.266283 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wzhpq_openshift-machine-config-operator(5558dc7c-93f9-4212-bf22-fdec743e47ee)\"" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" Dec 02 20:40:59 crc kubenswrapper[4796]: I1202 20:40:59.505656 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:40:59 crc kubenswrapper[4796]: I1202 20:40:59.619764 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:41:01 crc kubenswrapper[4796]: I1202 20:41:01.052365 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-db-create-6htqp"] Dec 02 20:41:01 crc kubenswrapper[4796]: I1202 20:41:01.060037 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-db-create-6htqp"] Dec 02 20:41:01 crc kubenswrapper[4796]: I1202 20:41:01.289036 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bb8f0a5-a41a-44f7-b7a4-d6a45a9f7b2f" path="/var/lib/kubelet/pods/0bb8f0a5-a41a-44f7-b7a4-d6a45a9f7b2f/volumes" Dec 02 20:41:02 crc kubenswrapper[4796]: I1202 20:41:02.040138 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-b9ad-account-create-update-b8ltn"] Dec 02 20:41:02 crc kubenswrapper[4796]: I1202 20:41:02.052484 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-b9ad-account-create-update-b8ltn"] Dec 02 20:41:03 crc kubenswrapper[4796]: I1202 20:41:03.296993 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfee3cc8-9e6e-48fb-a390-2185c467ddf4" path="/var/lib/kubelet/pods/bfee3cc8-9e6e-48fb-a390-2185c467ddf4/volumes" Dec 02 20:41:04 crc kubenswrapper[4796]: I1202 20:41:04.506198 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:41:04 crc kubenswrapper[4796]: I1202 20:41:04.523418 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:41:04 crc kubenswrapper[4796]: I1202 20:41:04.619339 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:41:04 crc kubenswrapper[4796]: I1202 20:41:04.645506 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:41:04 crc kubenswrapper[4796]: I1202 20:41:04.659808 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:41:04 crc kubenswrapper[4796]: I1202 20:41:04.680998 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:41:05 crc kubenswrapper[4796]: I1202 20:41:05.155585 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:41:05 crc kubenswrapper[4796]: I1202 20:41:05.162351 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:41:05 crc kubenswrapper[4796]: I1202 20:41:05.187443 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:41:05 crc kubenswrapper[4796]: I1202 20:41:05.195688 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:41:07 crc kubenswrapper[4796]: I1202 20:41:07.325552 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:41:07 crc 
kubenswrapper[4796]: I1202 20:41:07.326018 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="67b69481-e440-4d81-a985-460e6f585375" containerName="ceilometer-central-agent" containerID="cri-o://c7aaff964d6e783a9bddd5c85c3fc50bcc6eef9b67c565a0d130e8d9591add73" gracePeriod=30 Dec 02 20:41:07 crc kubenswrapper[4796]: I1202 20:41:07.326343 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="67b69481-e440-4d81-a985-460e6f585375" containerName="proxy-httpd" containerID="cri-o://fcc25f05be5265f22444731219537c1a4a18e97d847d305b76beb6d8ba32b87b" gracePeriod=30 Dec 02 20:41:07 crc kubenswrapper[4796]: I1202 20:41:07.326406 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="67b69481-e440-4d81-a985-460e6f585375" containerName="ceilometer-notification-agent" containerID="cri-o://b0669dae1269244936eec6d63b16c4bd0fa351595e95a3d0a86ab99962a7308c" gracePeriod=30 Dec 02 20:41:07 crc kubenswrapper[4796]: I1202 20:41:07.326542 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="67b69481-e440-4d81-a985-460e6f585375" containerName="sg-core" containerID="cri-o://30a5498219c6977650649380c99539c33920aeab4f235214128df9684f31b166" gracePeriod=30 Dec 02 20:41:07 crc kubenswrapper[4796]: I1202 20:41:07.712029 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="67b69481-e440-4d81-a985-460e6f585375" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.208:3000/\": read tcp 10.217.0.2:36936->10.217.0.208:3000: read: connection reset by peer" Dec 02 20:41:08 crc kubenswrapper[4796]: I1202 20:41:08.188093 4796 generic.go:334] "Generic (PLEG): container finished" podID="67b69481-e440-4d81-a985-460e6f585375" containerID="fcc25f05be5265f22444731219537c1a4a18e97d847d305b76beb6d8ba32b87b" exitCode=0 Dec 02 20:41:08 crc kubenswrapper[4796]: I1202 20:41:08.188125 4796 generic.go:334] "Generic (PLEG): container finished" podID="67b69481-e440-4d81-a985-460e6f585375" containerID="30a5498219c6977650649380c99539c33920aeab4f235214128df9684f31b166" exitCode=2 Dec 02 20:41:08 crc kubenswrapper[4796]: I1202 20:41:08.188137 4796 generic.go:334] "Generic (PLEG): container finished" podID="67b69481-e440-4d81-a985-460e6f585375" containerID="c7aaff964d6e783a9bddd5c85c3fc50bcc6eef9b67c565a0d130e8d9591add73" exitCode=0 Dec 02 20:41:08 crc kubenswrapper[4796]: I1202 20:41:08.188141 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"67b69481-e440-4d81-a985-460e6f585375","Type":"ContainerDied","Data":"fcc25f05be5265f22444731219537c1a4a18e97d847d305b76beb6d8ba32b87b"} Dec 02 20:41:08 crc kubenswrapper[4796]: I1202 20:41:08.188206 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"67b69481-e440-4d81-a985-460e6f585375","Type":"ContainerDied","Data":"30a5498219c6977650649380c99539c33920aeab4f235214128df9684f31b166"} Dec 02 20:41:08 crc kubenswrapper[4796]: I1202 20:41:08.188226 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"67b69481-e440-4d81-a985-460e6f585375","Type":"ContainerDied","Data":"c7aaff964d6e783a9bddd5c85c3fc50bcc6eef9b67c565a0d130e8d9591add73"} Dec 02 20:41:11 crc kubenswrapper[4796]: I1202 
20:41:11.218630 4796 generic.go:334] "Generic (PLEG): container finished" podID="67b69481-e440-4d81-a985-460e6f585375" containerID="b0669dae1269244936eec6d63b16c4bd0fa351595e95a3d0a86ab99962a7308c" exitCode=0 Dec 02 20:41:11 crc kubenswrapper[4796]: I1202 20:41:11.218736 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"67b69481-e440-4d81-a985-460e6f585375","Type":"ContainerDied","Data":"b0669dae1269244936eec6d63b16c4bd0fa351595e95a3d0a86ab99962a7308c"} Dec 02 20:41:11 crc kubenswrapper[4796]: I1202 20:41:11.875567 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.002304 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67b69481-e440-4d81-a985-460e6f585375-config-data\") pod \"67b69481-e440-4d81-a985-460e6f585375\" (UID: \"67b69481-e440-4d81-a985-460e6f585375\") " Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.002447 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67b69481-e440-4d81-a985-460e6f585375-scripts\") pod \"67b69481-e440-4d81-a985-460e6f585375\" (UID: \"67b69481-e440-4d81-a985-460e6f585375\") " Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.002549 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/67b69481-e440-4d81-a985-460e6f585375-ceilometer-tls-certs\") pod \"67b69481-e440-4d81-a985-460e6f585375\" (UID: \"67b69481-e440-4d81-a985-460e6f585375\") " Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.002594 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/67b69481-e440-4d81-a985-460e6f585375-sg-core-conf-yaml\") pod \"67b69481-e440-4d81-a985-460e6f585375\" (UID: \"67b69481-e440-4d81-a985-460e6f585375\") " Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.003350 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67b69481-e440-4d81-a985-460e6f585375-log-httpd\") pod \"67b69481-e440-4d81-a985-460e6f585375\" (UID: \"67b69481-e440-4d81-a985-460e6f585375\") " Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.003875 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67b69481-e440-4d81-a985-460e6f585375-run-httpd\") pod \"67b69481-e440-4d81-a985-460e6f585375\" (UID: \"67b69481-e440-4d81-a985-460e6f585375\") " Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.003939 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5hf6\" (UniqueName: \"kubernetes.io/projected/67b69481-e440-4d81-a985-460e6f585375-kube-api-access-h5hf6\") pod \"67b69481-e440-4d81-a985-460e6f585375\" (UID: \"67b69481-e440-4d81-a985-460e6f585375\") " Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.004021 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67b69481-e440-4d81-a985-460e6f585375-combined-ca-bundle\") pod \"67b69481-e440-4d81-a985-460e6f585375\" (UID: \"67b69481-e440-4d81-a985-460e6f585375\") " Dec 02 20:41:12 crc 
kubenswrapper[4796]: I1202 20:41:12.004124 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67b69481-e440-4d81-a985-460e6f585375-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "67b69481-e440-4d81-a985-460e6f585375" (UID: "67b69481-e440-4d81-a985-460e6f585375"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.004388 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67b69481-e440-4d81-a985-460e6f585375-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "67b69481-e440-4d81-a985-460e6f585375" (UID: "67b69481-e440-4d81-a985-460e6f585375"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.005089 4796 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67b69481-e440-4d81-a985-460e6f585375-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.005110 4796 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67b69481-e440-4d81-a985-460e6f585375-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.025539 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67b69481-e440-4d81-a985-460e6f585375-kube-api-access-h5hf6" (OuterVolumeSpecName: "kube-api-access-h5hf6") pod "67b69481-e440-4d81-a985-460e6f585375" (UID: "67b69481-e440-4d81-a985-460e6f585375"). InnerVolumeSpecName "kube-api-access-h5hf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.025685 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67b69481-e440-4d81-a985-460e6f585375-scripts" (OuterVolumeSpecName: "scripts") pod "67b69481-e440-4d81-a985-460e6f585375" (UID: "67b69481-e440-4d81-a985-460e6f585375"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.061459 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67b69481-e440-4d81-a985-460e6f585375-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "67b69481-e440-4d81-a985-460e6f585375" (UID: "67b69481-e440-4d81-a985-460e6f585375"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.079357 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67b69481-e440-4d81-a985-460e6f585375-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "67b69481-e440-4d81-a985-460e6f585375" (UID: "67b69481-e440-4d81-a985-460e6f585375"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.093728 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67b69481-e440-4d81-a985-460e6f585375-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67b69481-e440-4d81-a985-460e6f585375" (UID: "67b69481-e440-4d81-a985-460e6f585375"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.106245 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67b69481-e440-4d81-a985-460e6f585375-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.106297 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67b69481-e440-4d81-a985-460e6f585375-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.106307 4796 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/67b69481-e440-4d81-a985-460e6f585375-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.106317 4796 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/67b69481-e440-4d81-a985-460e6f585375-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.106325 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5hf6\" (UniqueName: \"kubernetes.io/projected/67b69481-e440-4d81-a985-460e6f585375-kube-api-access-h5hf6\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.140377 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67b69481-e440-4d81-a985-460e6f585375-config-data" (OuterVolumeSpecName: "config-data") pod "67b69481-e440-4d81-a985-460e6f585375" (UID: "67b69481-e440-4d81-a985-460e6f585375"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.207969 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67b69481-e440-4d81-a985-460e6f585375-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.231021 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"67b69481-e440-4d81-a985-460e6f585375","Type":"ContainerDied","Data":"ea002b185fa6a6d7fdd805ddbde3edd2d84159996707f31fce5731e978a43ac3"} Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.231081 4796 scope.go:117] "RemoveContainer" containerID="fcc25f05be5265f22444731219537c1a4a18e97d847d305b76beb6d8ba32b87b" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.231092 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.259164 4796 scope.go:117] "RemoveContainer" containerID="30a5498219c6977650649380c99539c33920aeab4f235214128df9684f31b166" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.279452 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.280989 4796 scope.go:117] "RemoveContainer" containerID="b0669dae1269244936eec6d63b16c4bd0fa351595e95a3d0a86ab99962a7308c" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.298365 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.306559 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:41:12 crc kubenswrapper[4796]: E1202 20:41:12.306912 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67b69481-e440-4d81-a985-460e6f585375" containerName="proxy-httpd" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.306929 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="67b69481-e440-4d81-a985-460e6f585375" containerName="proxy-httpd" Dec 02 20:41:12 crc kubenswrapper[4796]: E1202 20:41:12.306949 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67b69481-e440-4d81-a985-460e6f585375" containerName="ceilometer-central-agent" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.306955 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="67b69481-e440-4d81-a985-460e6f585375" containerName="ceilometer-central-agent" Dec 02 20:41:12 crc kubenswrapper[4796]: E1202 20:41:12.306969 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67b69481-e440-4d81-a985-460e6f585375" containerName="sg-core" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.306975 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="67b69481-e440-4d81-a985-460e6f585375" containerName="sg-core" Dec 02 20:41:12 crc kubenswrapper[4796]: E1202 20:41:12.306991 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67b69481-e440-4d81-a985-460e6f585375" containerName="ceilometer-notification-agent" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.306997 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="67b69481-e440-4d81-a985-460e6f585375" containerName="ceilometer-notification-agent" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.307160 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="67b69481-e440-4d81-a985-460e6f585375" containerName="ceilometer-central-agent" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.307175 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="67b69481-e440-4d81-a985-460e6f585375" containerName="proxy-httpd" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.307188 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="67b69481-e440-4d81-a985-460e6f585375" containerName="sg-core" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.307199 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="67b69481-e440-4d81-a985-460e6f585375" containerName="ceilometer-notification-agent" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.309734 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.314629 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.314919 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.315220 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.317221 4796 scope.go:117] "RemoveContainer" containerID="c7aaff964d6e783a9bddd5c85c3fc50bcc6eef9b67c565a0d130e8d9591add73" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.320483 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39c6f786-7734-4653-8274-4f6cf80c38ac-scripts\") pod \"ceilometer-0\" (UID: \"39c6f786-7734-4653-8274-4f6cf80c38ac\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.320650 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39c6f786-7734-4653-8274-4f6cf80c38ac-config-data\") pod \"ceilometer-0\" (UID: \"39c6f786-7734-4653-8274-4f6cf80c38ac\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.320752 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39c6f786-7734-4653-8274-4f6cf80c38ac-run-httpd\") pod \"ceilometer-0\" (UID: \"39c6f786-7734-4653-8274-4f6cf80c38ac\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.320787 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/39c6f786-7734-4653-8274-4f6cf80c38ac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"39c6f786-7734-4653-8274-4f6cf80c38ac\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.320808 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/39c6f786-7734-4653-8274-4f6cf80c38ac-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"39c6f786-7734-4653-8274-4f6cf80c38ac\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.320829 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39c6f786-7734-4653-8274-4f6cf80c38ac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"39c6f786-7734-4653-8274-4f6cf80c38ac\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.320875 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39c6f786-7734-4653-8274-4f6cf80c38ac-log-httpd\") pod \"ceilometer-0\" (UID: \"39c6f786-7734-4653-8274-4f6cf80c38ac\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.320969 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2jqp\" (UniqueName: \"kubernetes.io/projected/39c6f786-7734-4653-8274-4f6cf80c38ac-kube-api-access-n2jqp\") pod \"ceilometer-0\" (UID: \"39c6f786-7734-4653-8274-4f6cf80c38ac\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.343624 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.423458 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39c6f786-7734-4653-8274-4f6cf80c38ac-scripts\") pod \"ceilometer-0\" (UID: \"39c6f786-7734-4653-8274-4f6cf80c38ac\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.423595 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39c6f786-7734-4653-8274-4f6cf80c38ac-config-data\") pod \"ceilometer-0\" (UID: \"39c6f786-7734-4653-8274-4f6cf80c38ac\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.423703 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39c6f786-7734-4653-8274-4f6cf80c38ac-run-httpd\") pod \"ceilometer-0\" (UID: \"39c6f786-7734-4653-8274-4f6cf80c38ac\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.423743 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/39c6f786-7734-4653-8274-4f6cf80c38ac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"39c6f786-7734-4653-8274-4f6cf80c38ac\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.423773 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/39c6f786-7734-4653-8274-4f6cf80c38ac-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"39c6f786-7734-4653-8274-4f6cf80c38ac\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.423795 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39c6f786-7734-4653-8274-4f6cf80c38ac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"39c6f786-7734-4653-8274-4f6cf80c38ac\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.423858 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39c6f786-7734-4653-8274-4f6cf80c38ac-log-httpd\") pod \"ceilometer-0\" (UID: \"39c6f786-7734-4653-8274-4f6cf80c38ac\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.423969 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2jqp\" (UniqueName: \"kubernetes.io/projected/39c6f786-7734-4653-8274-4f6cf80c38ac-kube-api-access-n2jqp\") pod \"ceilometer-0\" (UID: \"39c6f786-7734-4653-8274-4f6cf80c38ac\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.424207 4796 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39c6f786-7734-4653-8274-4f6cf80c38ac-run-httpd\") pod \"ceilometer-0\" (UID: \"39c6f786-7734-4653-8274-4f6cf80c38ac\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.424785 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39c6f786-7734-4653-8274-4f6cf80c38ac-log-httpd\") pod \"ceilometer-0\" (UID: \"39c6f786-7734-4653-8274-4f6cf80c38ac\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.427630 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39c6f786-7734-4653-8274-4f6cf80c38ac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"39c6f786-7734-4653-8274-4f6cf80c38ac\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.427719 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39c6f786-7734-4653-8274-4f6cf80c38ac-scripts\") pod \"ceilometer-0\" (UID: \"39c6f786-7734-4653-8274-4f6cf80c38ac\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.428974 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/39c6f786-7734-4653-8274-4f6cf80c38ac-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"39c6f786-7734-4653-8274-4f6cf80c38ac\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.430033 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/39c6f786-7734-4653-8274-4f6cf80c38ac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"39c6f786-7734-4653-8274-4f6cf80c38ac\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.430467 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39c6f786-7734-4653-8274-4f6cf80c38ac-config-data\") pod \"ceilometer-0\" (UID: \"39c6f786-7734-4653-8274-4f6cf80c38ac\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.445078 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2jqp\" (UniqueName: \"kubernetes.io/projected/39c6f786-7734-4653-8274-4f6cf80c38ac-kube-api-access-n2jqp\") pod \"ceilometer-0\" (UID: \"39c6f786-7734-4653-8274-4f6cf80c38ac\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:12 crc kubenswrapper[4796]: I1202 20:41:12.634153 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:13 crc kubenswrapper[4796]: I1202 20:41:13.239418 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:41:13 crc kubenswrapper[4796]: W1202 20:41:13.270435 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39c6f786_7734_4653_8274_4f6cf80c38ac.slice/crio-ff10e2167d631cb78c7be383605c68550d77b6325bf35ad87e7935a28989d383 WatchSource:0}: Error finding container ff10e2167d631cb78c7be383605c68550d77b6325bf35ad87e7935a28989d383: Status 404 returned error can't find the container with id ff10e2167d631cb78c7be383605c68550d77b6325bf35ad87e7935a28989d383 Dec 02 20:41:13 crc kubenswrapper[4796]: I1202 20:41:13.283672 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67b69481-e440-4d81-a985-460e6f585375" path="/var/lib/kubelet/pods/67b69481-e440-4d81-a985-460e6f585375/volumes" Dec 02 20:41:13 crc kubenswrapper[4796]: I1202 20:41:13.390469 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-kgw9h"] Dec 02 20:41:13 crc kubenswrapper[4796]: I1202 20:41:13.399529 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-kgw9h"] Dec 02 20:41:13 crc kubenswrapper[4796]: I1202 20:41:13.434063 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:41:13 crc kubenswrapper[4796]: I1202 20:41:13.434579 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="c85da558-69ae-4fd0-95a3-e1f7b55f6392" containerName="watcher-decision-engine" containerID="cri-o://4f02237aa9c7036cf051ea905115cc6419b5d5a2a58f5e3e143fd279a4700903" gracePeriod=30 Dec 02 20:41:13 crc kubenswrapper[4796]: I1202 20:41:13.488751 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:41:13 crc kubenswrapper[4796]: I1202 20:41:13.489110 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="9c28fe42-3924-426f-81a8-9faed8a0f82f" containerName="watcher-applier" containerID="cri-o://5cb36a0be37c1e63f5e65e64da3c8d3504b9870e50b30cee239273d88ce584db" gracePeriod=30 Dec 02 20:41:13 crc kubenswrapper[4796]: I1202 20:41:13.510807 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher6e73-account-delete-ckn2h"] Dec 02 20:41:13 crc kubenswrapper[4796]: I1202 20:41:13.511849 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher6e73-account-delete-ckn2h" Dec 02 20:41:13 crc kubenswrapper[4796]: I1202 20:41:13.564480 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher6e73-account-delete-ckn2h"] Dec 02 20:41:13 crc kubenswrapper[4796]: I1202 20:41:13.591882 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:41:13 crc kubenswrapper[4796]: I1202 20:41:13.592232 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="9e614179-fc0a-45eb-9004-1ab496f7043e" containerName="watcher-kuttl-api-log" containerID="cri-o://4203d61dc8feb54977f9f049844aff5a0d3de631d2335058a52a4c8faf31e4ad" gracePeriod=30 Dec 02 20:41:13 crc kubenswrapper[4796]: I1202 20:41:13.592377 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="9e614179-fc0a-45eb-9004-1ab496f7043e" containerName="watcher-api" containerID="cri-o://dd5958e9a4c33789219a4e5ca1a205cf43a0951b7f369aee9f4591c04bf63a79" gracePeriod=30 Dec 02 20:41:13 crc kubenswrapper[4796]: I1202 20:41:13.647446 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81bea742-8b3a-480a-89cc-e70a964f645a-operator-scripts\") pod \"watcher6e73-account-delete-ckn2h\" (UID: \"81bea742-8b3a-480a-89cc-e70a964f645a\") " pod="watcher-kuttl-default/watcher6e73-account-delete-ckn2h" Dec 02 20:41:13 crc kubenswrapper[4796]: I1202 20:41:13.647759 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97wxm\" (UniqueName: \"kubernetes.io/projected/81bea742-8b3a-480a-89cc-e70a964f645a-kube-api-access-97wxm\") pod \"watcher6e73-account-delete-ckn2h\" (UID: \"81bea742-8b3a-480a-89cc-e70a964f645a\") " pod="watcher-kuttl-default/watcher6e73-account-delete-ckn2h" Dec 02 20:41:13 crc kubenswrapper[4796]: I1202 20:41:13.749172 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97wxm\" (UniqueName: \"kubernetes.io/projected/81bea742-8b3a-480a-89cc-e70a964f645a-kube-api-access-97wxm\") pod \"watcher6e73-account-delete-ckn2h\" (UID: \"81bea742-8b3a-480a-89cc-e70a964f645a\") " pod="watcher-kuttl-default/watcher6e73-account-delete-ckn2h" Dec 02 20:41:13 crc kubenswrapper[4796]: I1202 20:41:13.749281 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81bea742-8b3a-480a-89cc-e70a964f645a-operator-scripts\") pod \"watcher6e73-account-delete-ckn2h\" (UID: \"81bea742-8b3a-480a-89cc-e70a964f645a\") " pod="watcher-kuttl-default/watcher6e73-account-delete-ckn2h" Dec 02 20:41:13 crc kubenswrapper[4796]: I1202 20:41:13.749999 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81bea742-8b3a-480a-89cc-e70a964f645a-operator-scripts\") pod \"watcher6e73-account-delete-ckn2h\" (UID: \"81bea742-8b3a-480a-89cc-e70a964f645a\") " pod="watcher-kuttl-default/watcher6e73-account-delete-ckn2h" Dec 02 20:41:13 crc kubenswrapper[4796]: I1202 20:41:13.773388 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97wxm\" (UniqueName: \"kubernetes.io/projected/81bea742-8b3a-480a-89cc-e70a964f645a-kube-api-access-97wxm\") pod 
\"watcher6e73-account-delete-ckn2h\" (UID: \"81bea742-8b3a-480a-89cc-e70a964f645a\") " pod="watcher-kuttl-default/watcher6e73-account-delete-ckn2h" Dec 02 20:41:13 crc kubenswrapper[4796]: I1202 20:41:13.833726 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher6e73-account-delete-ckn2h" Dec 02 20:41:14 crc kubenswrapper[4796]: I1202 20:41:14.248933 4796 generic.go:334] "Generic (PLEG): container finished" podID="9e614179-fc0a-45eb-9004-1ab496f7043e" containerID="4203d61dc8feb54977f9f049844aff5a0d3de631d2335058a52a4c8faf31e4ad" exitCode=143 Dec 02 20:41:14 crc kubenswrapper[4796]: I1202 20:41:14.249029 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"9e614179-fc0a-45eb-9004-1ab496f7043e","Type":"ContainerDied","Data":"4203d61dc8feb54977f9f049844aff5a0d3de631d2335058a52a4c8faf31e4ad"} Dec 02 20:41:14 crc kubenswrapper[4796]: I1202 20:41:14.251329 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"39c6f786-7734-4653-8274-4f6cf80c38ac","Type":"ContainerStarted","Data":"ee2cc79fc370a14be079c0c0327fbd4af78114b5292e0e92b64cfca3d5fa0e19"} Dec 02 20:41:14 crc kubenswrapper[4796]: I1202 20:41:14.251368 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"39c6f786-7734-4653-8274-4f6cf80c38ac","Type":"ContainerStarted","Data":"ff10e2167d631cb78c7be383605c68550d77b6325bf35ad87e7935a28989d383"} Dec 02 20:41:14 crc kubenswrapper[4796]: I1202 20:41:14.265643 4796 scope.go:117] "RemoveContainer" containerID="a3b1e4c1e71e5db50d52130283a4f488c4d2f8d9bfa463ec7357e0975e3daac1" Dec 02 20:41:14 crc kubenswrapper[4796]: E1202 20:41:14.265902 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wzhpq_openshift-machine-config-operator(5558dc7c-93f9-4212-bf22-fdec743e47ee)\"" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" Dec 02 20:41:14 crc kubenswrapper[4796]: I1202 20:41:14.378812 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher6e73-account-delete-ckn2h"] Dec 02 20:41:14 crc kubenswrapper[4796]: I1202 20:41:14.507428 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="9e614179-fc0a-45eb-9004-1ab496f7043e" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.212:9322/\": dial tcp 10.217.0.212:9322: connect: connection refused" Dec 02 20:41:14 crc kubenswrapper[4796]: I1202 20:41:14.507660 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="9e614179-fc0a-45eb-9004-1ab496f7043e" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.212:9322/\": dial tcp 10.217.0.212:9322: connect: connection refused" Dec 02 20:41:14 crc kubenswrapper[4796]: E1202 20:41:14.624087 4796 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5cb36a0be37c1e63f5e65e64da3c8d3504b9870e50b30cee239273d88ce584db" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 02 20:41:14 crc 
kubenswrapper[4796]: E1202 20:41:14.633092 4796 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5cb36a0be37c1e63f5e65e64da3c8d3504b9870e50b30cee239273d88ce584db" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 02 20:41:14 crc kubenswrapper[4796]: E1202 20:41:14.636100 4796 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5cb36a0be37c1e63f5e65e64da3c8d3504b9870e50b30cee239273d88ce584db" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 02 20:41:14 crc kubenswrapper[4796]: E1202 20:41:14.636141 4796 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="9c28fe42-3924-426f-81a8-9faed8a0f82f" containerName="watcher-applier" Dec 02 20:41:15 crc kubenswrapper[4796]: I1202 20:41:15.008205 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:41:15 crc kubenswrapper[4796]: I1202 20:41:15.180090 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e614179-fc0a-45eb-9004-1ab496f7043e-config-data\") pod \"9e614179-fc0a-45eb-9004-1ab496f7043e\" (UID: \"9e614179-fc0a-45eb-9004-1ab496f7043e\") " Dec 02 20:41:15 crc kubenswrapper[4796]: I1202 20:41:15.180146 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e614179-fc0a-45eb-9004-1ab496f7043e-combined-ca-bundle\") pod \"9e614179-fc0a-45eb-9004-1ab496f7043e\" (UID: \"9e614179-fc0a-45eb-9004-1ab496f7043e\") " Dec 02 20:41:15 crc kubenswrapper[4796]: I1202 20:41:15.180218 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4vt6\" (UniqueName: \"kubernetes.io/projected/9e614179-fc0a-45eb-9004-1ab496f7043e-kube-api-access-n4vt6\") pod \"9e614179-fc0a-45eb-9004-1ab496f7043e\" (UID: \"9e614179-fc0a-45eb-9004-1ab496f7043e\") " Dec 02 20:41:15 crc kubenswrapper[4796]: I1202 20:41:15.180305 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/9e614179-fc0a-45eb-9004-1ab496f7043e-cert-memcached-mtls\") pod \"9e614179-fc0a-45eb-9004-1ab496f7043e\" (UID: \"9e614179-fc0a-45eb-9004-1ab496f7043e\") " Dec 02 20:41:15 crc kubenswrapper[4796]: I1202 20:41:15.180335 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9e614179-fc0a-45eb-9004-1ab496f7043e-custom-prometheus-ca\") pod \"9e614179-fc0a-45eb-9004-1ab496f7043e\" (UID: \"9e614179-fc0a-45eb-9004-1ab496f7043e\") " Dec 02 20:41:15 crc kubenswrapper[4796]: I1202 20:41:15.180354 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e614179-fc0a-45eb-9004-1ab496f7043e-logs\") pod \"9e614179-fc0a-45eb-9004-1ab496f7043e\" (UID: \"9e614179-fc0a-45eb-9004-1ab496f7043e\") " Dec 02 20:41:15 crc kubenswrapper[4796]: I1202 20:41:15.180970 4796 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e614179-fc0a-45eb-9004-1ab496f7043e-logs" (OuterVolumeSpecName: "logs") pod "9e614179-fc0a-45eb-9004-1ab496f7043e" (UID: "9e614179-fc0a-45eb-9004-1ab496f7043e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:41:15 crc kubenswrapper[4796]: I1202 20:41:15.190189 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e614179-fc0a-45eb-9004-1ab496f7043e-kube-api-access-n4vt6" (OuterVolumeSpecName: "kube-api-access-n4vt6") pod "9e614179-fc0a-45eb-9004-1ab496f7043e" (UID: "9e614179-fc0a-45eb-9004-1ab496f7043e"). InnerVolumeSpecName "kube-api-access-n4vt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:41:15 crc kubenswrapper[4796]: I1202 20:41:15.226331 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e614179-fc0a-45eb-9004-1ab496f7043e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e614179-fc0a-45eb-9004-1ab496f7043e" (UID: "9e614179-fc0a-45eb-9004-1ab496f7043e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:41:15 crc kubenswrapper[4796]: I1202 20:41:15.242290 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e614179-fc0a-45eb-9004-1ab496f7043e-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "9e614179-fc0a-45eb-9004-1ab496f7043e" (UID: "9e614179-fc0a-45eb-9004-1ab496f7043e"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:41:15 crc kubenswrapper[4796]: I1202 20:41:15.246504 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e614179-fc0a-45eb-9004-1ab496f7043e-config-data" (OuterVolumeSpecName: "config-data") pod "9e614179-fc0a-45eb-9004-1ab496f7043e" (UID: "9e614179-fc0a-45eb-9004-1ab496f7043e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:41:15 crc kubenswrapper[4796]: I1202 20:41:15.266470 4796 generic.go:334] "Generic (PLEG): container finished" podID="81bea742-8b3a-480a-89cc-e70a964f645a" containerID="a1883ed18e1868e34754ad44fa243126f21f33f232e552f8b1fa5f8e79a34e7c" exitCode=0 Dec 02 20:41:15 crc kubenswrapper[4796]: I1202 20:41:15.269484 4796 generic.go:334] "Generic (PLEG): container finished" podID="9e614179-fc0a-45eb-9004-1ab496f7043e" containerID="dd5958e9a4c33789219a4e5ca1a205cf43a0951b7f369aee9f4591c04bf63a79" exitCode=0 Dec 02 20:41:15 crc kubenswrapper[4796]: I1202 20:41:15.269654 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:41:15 crc kubenswrapper[4796]: I1202 20:41:15.282439 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e614179-fc0a-45eb-9004-1ab496f7043e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:15 crc kubenswrapper[4796]: I1202 20:41:15.282500 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4vt6\" (UniqueName: \"kubernetes.io/projected/9e614179-fc0a-45eb-9004-1ab496f7043e-kube-api-access-n4vt6\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:15 crc kubenswrapper[4796]: I1202 20:41:15.282514 4796 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9e614179-fc0a-45eb-9004-1ab496f7043e-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:15 crc kubenswrapper[4796]: I1202 20:41:15.282527 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e614179-fc0a-45eb-9004-1ab496f7043e-logs\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:15 crc kubenswrapper[4796]: I1202 20:41:15.282539 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e614179-fc0a-45eb-9004-1ab496f7043e-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:15 crc kubenswrapper[4796]: I1202 20:41:15.285482 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3692da8f-12ea-4a1d-8107-0f6272035f3d" path="/var/lib/kubelet/pods/3692da8f-12ea-4a1d-8107-0f6272035f3d/volumes" Dec 02 20:41:15 crc kubenswrapper[4796]: I1202 20:41:15.303352 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e614179-fc0a-45eb-9004-1ab496f7043e-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "9e614179-fc0a-45eb-9004-1ab496f7043e" (UID: "9e614179-fc0a-45eb-9004-1ab496f7043e"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:41:15 crc kubenswrapper[4796]: I1202 20:41:15.318086 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher6e73-account-delete-ckn2h" event={"ID":"81bea742-8b3a-480a-89cc-e70a964f645a","Type":"ContainerDied","Data":"a1883ed18e1868e34754ad44fa243126f21f33f232e552f8b1fa5f8e79a34e7c"} Dec 02 20:41:15 crc kubenswrapper[4796]: I1202 20:41:15.318128 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher6e73-account-delete-ckn2h" event={"ID":"81bea742-8b3a-480a-89cc-e70a964f645a","Type":"ContainerStarted","Data":"560a4f585330d7de055c720018e3563ce31c23eb3f891cc9489be94bb36db8cd"} Dec 02 20:41:15 crc kubenswrapper[4796]: I1202 20:41:15.318139 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"9e614179-fc0a-45eb-9004-1ab496f7043e","Type":"ContainerDied","Data":"dd5958e9a4c33789219a4e5ca1a205cf43a0951b7f369aee9f4591c04bf63a79"} Dec 02 20:41:15 crc kubenswrapper[4796]: I1202 20:41:15.318155 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"9e614179-fc0a-45eb-9004-1ab496f7043e","Type":"ContainerDied","Data":"00dff3823cadfcfc0ded0159333a27a47a14017355f152922c074d4735aa274d"} Dec 02 20:41:15 crc kubenswrapper[4796]: I1202 20:41:15.318164 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"39c6f786-7734-4653-8274-4f6cf80c38ac","Type":"ContainerStarted","Data":"4c81ed0b47f14c98083166b5a0fe7b575b951ae9572ead713e8af4d82ffb4339"} Dec 02 20:41:15 crc kubenswrapper[4796]: I1202 20:41:15.318183 4796 scope.go:117] "RemoveContainer" containerID="dd5958e9a4c33789219a4e5ca1a205cf43a0951b7f369aee9f4591c04bf63a79" Dec 02 20:41:15 crc kubenswrapper[4796]: I1202 20:41:15.355071 4796 scope.go:117] "RemoveContainer" containerID="4203d61dc8feb54977f9f049844aff5a0d3de631d2335058a52a4c8faf31e4ad" Dec 02 20:41:15 crc kubenswrapper[4796]: I1202 20:41:15.374386 4796 scope.go:117] "RemoveContainer" containerID="dd5958e9a4c33789219a4e5ca1a205cf43a0951b7f369aee9f4591c04bf63a79" Dec 02 20:41:15 crc kubenswrapper[4796]: E1202 20:41:15.374991 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd5958e9a4c33789219a4e5ca1a205cf43a0951b7f369aee9f4591c04bf63a79\": container with ID starting with dd5958e9a4c33789219a4e5ca1a205cf43a0951b7f369aee9f4591c04bf63a79 not found: ID does not exist" containerID="dd5958e9a4c33789219a4e5ca1a205cf43a0951b7f369aee9f4591c04bf63a79" Dec 02 20:41:15 crc kubenswrapper[4796]: I1202 20:41:15.375083 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd5958e9a4c33789219a4e5ca1a205cf43a0951b7f369aee9f4591c04bf63a79"} err="failed to get container status \"dd5958e9a4c33789219a4e5ca1a205cf43a0951b7f369aee9f4591c04bf63a79\": rpc error: code = NotFound desc = could not find container \"dd5958e9a4c33789219a4e5ca1a205cf43a0951b7f369aee9f4591c04bf63a79\": container with ID starting with dd5958e9a4c33789219a4e5ca1a205cf43a0951b7f369aee9f4591c04bf63a79 not found: ID does not exist" Dec 02 20:41:15 crc kubenswrapper[4796]: I1202 20:41:15.375119 4796 scope.go:117] "RemoveContainer" containerID="4203d61dc8feb54977f9f049844aff5a0d3de631d2335058a52a4c8faf31e4ad" Dec 02 20:41:15 crc kubenswrapper[4796]: E1202 20:41:15.375710 4796 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"4203d61dc8feb54977f9f049844aff5a0d3de631d2335058a52a4c8faf31e4ad\": container with ID starting with 4203d61dc8feb54977f9f049844aff5a0d3de631d2335058a52a4c8faf31e4ad not found: ID does not exist" containerID="4203d61dc8feb54977f9f049844aff5a0d3de631d2335058a52a4c8faf31e4ad" Dec 02 20:41:15 crc kubenswrapper[4796]: I1202 20:41:15.375788 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4203d61dc8feb54977f9f049844aff5a0d3de631d2335058a52a4c8faf31e4ad"} err="failed to get container status \"4203d61dc8feb54977f9f049844aff5a0d3de631d2335058a52a4c8faf31e4ad\": rpc error: code = NotFound desc = could not find container \"4203d61dc8feb54977f9f049844aff5a0d3de631d2335058a52a4c8faf31e4ad\": container with ID starting with 4203d61dc8feb54977f9f049844aff5a0d3de631d2335058a52a4c8faf31e4ad not found: ID does not exist" Dec 02 20:41:15 crc kubenswrapper[4796]: I1202 20:41:15.385522 4796 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/9e614179-fc0a-45eb-9004-1ab496f7043e-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:15 crc kubenswrapper[4796]: I1202 20:41:15.604509 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:41:15 crc kubenswrapper[4796]: I1202 20:41:15.609605 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:41:16 crc kubenswrapper[4796]: I1202 20:41:16.288994 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"39c6f786-7734-4653-8274-4f6cf80c38ac","Type":"ContainerStarted","Data":"d2611b250c35d8ad0dcf3756f3a0d4e8321d4b888042dd7c09571f405e07982c"} Dec 02 20:41:16 crc kubenswrapper[4796]: I1202 20:41:16.293038 4796 generic.go:334] "Generic (PLEG): container finished" podID="9c28fe42-3924-426f-81a8-9faed8a0f82f" containerID="5cb36a0be37c1e63f5e65e64da3c8d3504b9870e50b30cee239273d88ce584db" exitCode=0 Dec 02 20:41:16 crc kubenswrapper[4796]: I1202 20:41:16.293156 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"9c28fe42-3924-426f-81a8-9faed8a0f82f","Type":"ContainerDied","Data":"5cb36a0be37c1e63f5e65e64da3c8d3504b9870e50b30cee239273d88ce584db"} Dec 02 20:41:16 crc kubenswrapper[4796]: I1202 20:41:16.689961 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:41:16 crc kubenswrapper[4796]: I1202 20:41:16.724860 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:41:16 crc kubenswrapper[4796]: I1202 20:41:16.791098 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher6e73-account-delete-ckn2h" Dec 02 20:41:16 crc kubenswrapper[4796]: I1202 20:41:16.918184 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c28fe42-3924-426f-81a8-9faed8a0f82f-combined-ca-bundle\") pod \"9c28fe42-3924-426f-81a8-9faed8a0f82f\" (UID: \"9c28fe42-3924-426f-81a8-9faed8a0f82f\") " Dec 02 20:41:16 crc kubenswrapper[4796]: I1202 20:41:16.918275 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c28fe42-3924-426f-81a8-9faed8a0f82f-logs\") pod \"9c28fe42-3924-426f-81a8-9faed8a0f82f\" (UID: \"9c28fe42-3924-426f-81a8-9faed8a0f82f\") " Dec 02 20:41:16 crc kubenswrapper[4796]: I1202 20:41:16.918309 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c28fe42-3924-426f-81a8-9faed8a0f82f-config-data\") pod \"9c28fe42-3924-426f-81a8-9faed8a0f82f\" (UID: \"9c28fe42-3924-426f-81a8-9faed8a0f82f\") " Dec 02 20:41:16 crc kubenswrapper[4796]: I1202 20:41:16.918358 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqdph\" (UniqueName: \"kubernetes.io/projected/9c28fe42-3924-426f-81a8-9faed8a0f82f-kube-api-access-hqdph\") pod \"9c28fe42-3924-426f-81a8-9faed8a0f82f\" (UID: \"9c28fe42-3924-426f-81a8-9faed8a0f82f\") " Dec 02 20:41:16 crc kubenswrapper[4796]: I1202 20:41:16.918387 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81bea742-8b3a-480a-89cc-e70a964f645a-operator-scripts\") pod \"81bea742-8b3a-480a-89cc-e70a964f645a\" (UID: \"81bea742-8b3a-480a-89cc-e70a964f645a\") " Dec 02 20:41:16 crc kubenswrapper[4796]: I1202 20:41:16.918438 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97wxm\" (UniqueName: \"kubernetes.io/projected/81bea742-8b3a-480a-89cc-e70a964f645a-kube-api-access-97wxm\") pod \"81bea742-8b3a-480a-89cc-e70a964f645a\" (UID: \"81bea742-8b3a-480a-89cc-e70a964f645a\") " Dec 02 20:41:16 crc kubenswrapper[4796]: I1202 20:41:16.918462 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/9c28fe42-3924-426f-81a8-9faed8a0f82f-cert-memcached-mtls\") pod \"9c28fe42-3924-426f-81a8-9faed8a0f82f\" (UID: \"9c28fe42-3924-426f-81a8-9faed8a0f82f\") " Dec 02 20:41:16 crc kubenswrapper[4796]: I1202 20:41:16.918757 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c28fe42-3924-426f-81a8-9faed8a0f82f-logs" (OuterVolumeSpecName: "logs") pod "9c28fe42-3924-426f-81a8-9faed8a0f82f" (UID: "9c28fe42-3924-426f-81a8-9faed8a0f82f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:41:16 crc kubenswrapper[4796]: I1202 20:41:16.919375 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81bea742-8b3a-480a-89cc-e70a964f645a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "81bea742-8b3a-480a-89cc-e70a964f645a" (UID: "81bea742-8b3a-480a-89cc-e70a964f645a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:41:16 crc kubenswrapper[4796]: I1202 20:41:16.924051 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c28fe42-3924-426f-81a8-9faed8a0f82f-kube-api-access-hqdph" (OuterVolumeSpecName: "kube-api-access-hqdph") pod "9c28fe42-3924-426f-81a8-9faed8a0f82f" (UID: "9c28fe42-3924-426f-81a8-9faed8a0f82f"). InnerVolumeSpecName "kube-api-access-hqdph". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:41:16 crc kubenswrapper[4796]: I1202 20:41:16.926537 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81bea742-8b3a-480a-89cc-e70a964f645a-kube-api-access-97wxm" (OuterVolumeSpecName: "kube-api-access-97wxm") pod "81bea742-8b3a-480a-89cc-e70a964f645a" (UID: "81bea742-8b3a-480a-89cc-e70a964f645a"). InnerVolumeSpecName "kube-api-access-97wxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:41:16 crc kubenswrapper[4796]: I1202 20:41:16.954203 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c28fe42-3924-426f-81a8-9faed8a0f82f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c28fe42-3924-426f-81a8-9faed8a0f82f" (UID: "9c28fe42-3924-426f-81a8-9faed8a0f82f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:41:16 crc kubenswrapper[4796]: I1202 20:41:16.975387 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c28fe42-3924-426f-81a8-9faed8a0f82f-config-data" (OuterVolumeSpecName: "config-data") pod "9c28fe42-3924-426f-81a8-9faed8a0f82f" (UID: "9c28fe42-3924-426f-81a8-9faed8a0f82f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:41:17 crc kubenswrapper[4796]: I1202 20:41:17.007545 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c28fe42-3924-426f-81a8-9faed8a0f82f-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "9c28fe42-3924-426f-81a8-9faed8a0f82f" (UID: "9c28fe42-3924-426f-81a8-9faed8a0f82f"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:41:17 crc kubenswrapper[4796]: I1202 20:41:17.021157 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c28fe42-3924-426f-81a8-9faed8a0f82f-logs\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:17 crc kubenswrapper[4796]: I1202 20:41:17.021188 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c28fe42-3924-426f-81a8-9faed8a0f82f-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:17 crc kubenswrapper[4796]: I1202 20:41:17.021200 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqdph\" (UniqueName: \"kubernetes.io/projected/9c28fe42-3924-426f-81a8-9faed8a0f82f-kube-api-access-hqdph\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:17 crc kubenswrapper[4796]: I1202 20:41:17.021233 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81bea742-8b3a-480a-89cc-e70a964f645a-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:17 crc kubenswrapper[4796]: I1202 20:41:17.021247 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97wxm\" (UniqueName: \"kubernetes.io/projected/81bea742-8b3a-480a-89cc-e70a964f645a-kube-api-access-97wxm\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:17 crc kubenswrapper[4796]: I1202 20:41:17.021296 4796 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/9c28fe42-3924-426f-81a8-9faed8a0f82f-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:17 crc kubenswrapper[4796]: I1202 20:41:17.021305 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c28fe42-3924-426f-81a8-9faed8a0f82f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:17 crc kubenswrapper[4796]: I1202 20:41:17.277871 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e614179-fc0a-45eb-9004-1ab496f7043e" path="/var/lib/kubelet/pods/9e614179-fc0a-45eb-9004-1ab496f7043e/volumes" Dec 02 20:41:17 crc kubenswrapper[4796]: I1202 20:41:17.304931 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"9c28fe42-3924-426f-81a8-9faed8a0f82f","Type":"ContainerDied","Data":"40da32c3aa72b6bf67d869b303bc888b545c66661dbc1538c32176fe1ca7e117"} Dec 02 20:41:17 crc kubenswrapper[4796]: I1202 20:41:17.305009 4796 scope.go:117] "RemoveContainer" containerID="5cb36a0be37c1e63f5e65e64da3c8d3504b9870e50b30cee239273d88ce584db" Dec 02 20:41:17 crc kubenswrapper[4796]: I1202 20:41:17.307985 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:41:17 crc kubenswrapper[4796]: I1202 20:41:17.308563 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher6e73-account-delete-ckn2h" Dec 02 20:41:17 crc kubenswrapper[4796]: I1202 20:41:17.309200 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher6e73-account-delete-ckn2h" event={"ID":"81bea742-8b3a-480a-89cc-e70a964f645a","Type":"ContainerDied","Data":"560a4f585330d7de055c720018e3563ce31c23eb3f891cc9489be94bb36db8cd"} Dec 02 20:41:17 crc kubenswrapper[4796]: I1202 20:41:17.309279 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="560a4f585330d7de055c720018e3563ce31c23eb3f891cc9489be94bb36db8cd" Dec 02 20:41:17 crc kubenswrapper[4796]: I1202 20:41:17.316207 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"39c6f786-7734-4653-8274-4f6cf80c38ac","Type":"ContainerStarted","Data":"977c29af7e420c8e7bfca808c1c9756aedf0fce7f20584edb01842c71abaaaf5"} Dec 02 20:41:17 crc kubenswrapper[4796]: I1202 20:41:17.316690 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="39c6f786-7734-4653-8274-4f6cf80c38ac" containerName="ceilometer-central-agent" containerID="cri-o://ee2cc79fc370a14be079c0c0327fbd4af78114b5292e0e92b64cfca3d5fa0e19" gracePeriod=30 Dec 02 20:41:17 crc kubenswrapper[4796]: I1202 20:41:17.317081 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:17 crc kubenswrapper[4796]: I1202 20:41:17.317146 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="39c6f786-7734-4653-8274-4f6cf80c38ac" containerName="proxy-httpd" containerID="cri-o://977c29af7e420c8e7bfca808c1c9756aedf0fce7f20584edb01842c71abaaaf5" gracePeriod=30 Dec 02 20:41:17 crc kubenswrapper[4796]: I1202 20:41:17.317805 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="39c6f786-7734-4653-8274-4f6cf80c38ac" containerName="sg-core" containerID="cri-o://d2611b250c35d8ad0dcf3756f3a0d4e8321d4b888042dd7c09571f405e07982c" gracePeriod=30 Dec 02 20:41:17 crc kubenswrapper[4796]: I1202 20:41:17.317852 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="39c6f786-7734-4653-8274-4f6cf80c38ac" containerName="ceilometer-notification-agent" containerID="cri-o://4c81ed0b47f14c98083166b5a0fe7b575b951ae9572ead713e8af4d82ffb4339" gracePeriod=30 Dec 02 20:41:17 crc kubenswrapper[4796]: I1202 20:41:17.365733 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:41:17 crc kubenswrapper[4796]: I1202 20:41:17.382548 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:41:17 crc kubenswrapper[4796]: I1202 20:41:17.386714 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.778623962 podStartE2EDuration="5.386690703s" podCreationTimestamp="2025-12-02 20:41:12 +0000 UTC" firstStartedPulling="2025-12-02 20:41:13.275902935 +0000 UTC m=+1756.279278469" lastFinishedPulling="2025-12-02 20:41:16.883969676 +0000 UTC m=+1759.887345210" observedRunningTime="2025-12-02 20:41:17.37577344 +0000 UTC m=+1760.379148974" watchObservedRunningTime="2025-12-02 20:41:17.386690703 +0000 UTC m=+1760.390066247" Dec 02 20:41:18 crc 
kubenswrapper[4796]: I1202 20:41:18.094862 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.279220 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c85da558-69ae-4fd0-95a3-e1f7b55f6392-logs\") pod \"c85da558-69ae-4fd0-95a3-e1f7b55f6392\" (UID: \"c85da558-69ae-4fd0-95a3-e1f7b55f6392\") " Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.279279 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spfqm\" (UniqueName: \"kubernetes.io/projected/c85da558-69ae-4fd0-95a3-e1f7b55f6392-kube-api-access-spfqm\") pod \"c85da558-69ae-4fd0-95a3-e1f7b55f6392\" (UID: \"c85da558-69ae-4fd0-95a3-e1f7b55f6392\") " Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.279365 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c85da558-69ae-4fd0-95a3-e1f7b55f6392-combined-ca-bundle\") pod \"c85da558-69ae-4fd0-95a3-e1f7b55f6392\" (UID: \"c85da558-69ae-4fd0-95a3-e1f7b55f6392\") " Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.279441 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c85da558-69ae-4fd0-95a3-e1f7b55f6392-cert-memcached-mtls\") pod \"c85da558-69ae-4fd0-95a3-e1f7b55f6392\" (UID: \"c85da558-69ae-4fd0-95a3-e1f7b55f6392\") " Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.279513 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c85da558-69ae-4fd0-95a3-e1f7b55f6392-config-data\") pod \"c85da558-69ae-4fd0-95a3-e1f7b55f6392\" (UID: \"c85da558-69ae-4fd0-95a3-e1f7b55f6392\") " Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.279541 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c85da558-69ae-4fd0-95a3-e1f7b55f6392-custom-prometheus-ca\") pod \"c85da558-69ae-4fd0-95a3-e1f7b55f6392\" (UID: \"c85da558-69ae-4fd0-95a3-e1f7b55f6392\") " Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.279767 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c85da558-69ae-4fd0-95a3-e1f7b55f6392-logs" (OuterVolumeSpecName: "logs") pod "c85da558-69ae-4fd0-95a3-e1f7b55f6392" (UID: "c85da558-69ae-4fd0-95a3-e1f7b55f6392"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.280012 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c85da558-69ae-4fd0-95a3-e1f7b55f6392-logs\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.311999 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c85da558-69ae-4fd0-95a3-e1f7b55f6392-kube-api-access-spfqm" (OuterVolumeSpecName: "kube-api-access-spfqm") pod "c85da558-69ae-4fd0-95a3-e1f7b55f6392" (UID: "c85da558-69ae-4fd0-95a3-e1f7b55f6392"). InnerVolumeSpecName "kube-api-access-spfqm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.340946 4796 generic.go:334] "Generic (PLEG): container finished" podID="c85da558-69ae-4fd0-95a3-e1f7b55f6392" containerID="4f02237aa9c7036cf051ea905115cc6419b5d5a2a58f5e3e143fd279a4700903" exitCode=0 Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.341208 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.342365 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"c85da558-69ae-4fd0-95a3-e1f7b55f6392","Type":"ContainerDied","Data":"4f02237aa9c7036cf051ea905115cc6419b5d5a2a58f5e3e143fd279a4700903"} Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.342418 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"c85da558-69ae-4fd0-95a3-e1f7b55f6392","Type":"ContainerDied","Data":"41d34db2bb24d3da8a66e52edca71ab2199fe26a2a20e428a399e9b695484118"} Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.342446 4796 scope.go:117] "RemoveContainer" containerID="4f02237aa9c7036cf051ea905115cc6419b5d5a2a58f5e3e143fd279a4700903" Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.354345 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c85da558-69ae-4fd0-95a3-e1f7b55f6392-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c85da558-69ae-4fd0-95a3-e1f7b55f6392" (UID: "c85da558-69ae-4fd0-95a3-e1f7b55f6392"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.361078 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c85da558-69ae-4fd0-95a3-e1f7b55f6392-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "c85da558-69ae-4fd0-95a3-e1f7b55f6392" (UID: "c85da558-69ae-4fd0-95a3-e1f7b55f6392"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.382630 4796 generic.go:334] "Generic (PLEG): container finished" podID="39c6f786-7734-4653-8274-4f6cf80c38ac" containerID="977c29af7e420c8e7bfca808c1c9756aedf0fce7f20584edb01842c71abaaaf5" exitCode=0 Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.382666 4796 generic.go:334] "Generic (PLEG): container finished" podID="39c6f786-7734-4653-8274-4f6cf80c38ac" containerID="d2611b250c35d8ad0dcf3756f3a0d4e8321d4b888042dd7c09571f405e07982c" exitCode=2 Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.382674 4796 generic.go:334] "Generic (PLEG): container finished" podID="39c6f786-7734-4653-8274-4f6cf80c38ac" containerID="4c81ed0b47f14c98083166b5a0fe7b575b951ae9572ead713e8af4d82ffb4339" exitCode=0 Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.382698 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"39c6f786-7734-4653-8274-4f6cf80c38ac","Type":"ContainerDied","Data":"977c29af7e420c8e7bfca808c1c9756aedf0fce7f20584edb01842c71abaaaf5"} Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.382726 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"39c6f786-7734-4653-8274-4f6cf80c38ac","Type":"ContainerDied","Data":"d2611b250c35d8ad0dcf3756f3a0d4e8321d4b888042dd7c09571f405e07982c"} Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.382739 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"39c6f786-7734-4653-8274-4f6cf80c38ac","Type":"ContainerDied","Data":"4c81ed0b47f14c98083166b5a0fe7b575b951ae9572ead713e8af4d82ffb4339"} Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.388016 4796 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c85da558-69ae-4fd0-95a3-e1f7b55f6392-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.388070 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spfqm\" (UniqueName: \"kubernetes.io/projected/c85da558-69ae-4fd0-95a3-e1f7b55f6392-kube-api-access-spfqm\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.388089 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c85da558-69ae-4fd0-95a3-e1f7b55f6392-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.411470 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c85da558-69ae-4fd0-95a3-e1f7b55f6392-config-data" (OuterVolumeSpecName: "config-data") pod "c85da558-69ae-4fd0-95a3-e1f7b55f6392" (UID: "c85da558-69ae-4fd0-95a3-e1f7b55f6392"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.413816 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c85da558-69ae-4fd0-95a3-e1f7b55f6392-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "c85da558-69ae-4fd0-95a3-e1f7b55f6392" (UID: "c85da558-69ae-4fd0-95a3-e1f7b55f6392"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.433979 4796 scope.go:117] "RemoveContainer" containerID="4f02237aa9c7036cf051ea905115cc6419b5d5a2a58f5e3e143fd279a4700903" Dec 02 20:41:18 crc kubenswrapper[4796]: E1202 20:41:18.434571 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f02237aa9c7036cf051ea905115cc6419b5d5a2a58f5e3e143fd279a4700903\": container with ID starting with 4f02237aa9c7036cf051ea905115cc6419b5d5a2a58f5e3e143fd279a4700903 not found: ID does not exist" containerID="4f02237aa9c7036cf051ea905115cc6419b5d5a2a58f5e3e143fd279a4700903" Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.434625 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f02237aa9c7036cf051ea905115cc6419b5d5a2a58f5e3e143fd279a4700903"} err="failed to get container status \"4f02237aa9c7036cf051ea905115cc6419b5d5a2a58f5e3e143fd279a4700903\": rpc error: code = NotFound desc = could not find container \"4f02237aa9c7036cf051ea905115cc6419b5d5a2a58f5e3e143fd279a4700903\": container with ID starting with 4f02237aa9c7036cf051ea905115cc6419b5d5a2a58f5e3e143fd279a4700903 not found: ID does not exist" Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.490197 4796 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c85da558-69ae-4fd0-95a3-e1f7b55f6392-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.490243 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c85da558-69ae-4fd0-95a3-e1f7b55f6392-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.553052 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-jvhfg"] Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.561793 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-jvhfg"] Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.577366 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher6e73-account-delete-ckn2h"] Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.582780 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-6e73-account-create-update-ln8bg"] Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.591987 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-6e73-account-create-update-ln8bg"] Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.597217 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher6e73-account-delete-ckn2h"] Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.634964 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-9qxxc"] Dec 02 20:41:18 crc kubenswrapper[4796]: E1202 20:41:18.635467 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e614179-fc0a-45eb-9004-1ab496f7043e" containerName="watcher-kuttl-api-log" Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.635488 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e614179-fc0a-45eb-9004-1ab496f7043e" containerName="watcher-kuttl-api-log" Dec 02 20:41:18 crc kubenswrapper[4796]: E1202 20:41:18.635496 4796 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81bea742-8b3a-480a-89cc-e70a964f645a" containerName="mariadb-account-delete" Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.635503 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="81bea742-8b3a-480a-89cc-e70a964f645a" containerName="mariadb-account-delete" Dec 02 20:41:18 crc kubenswrapper[4796]: E1202 20:41:18.635538 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c28fe42-3924-426f-81a8-9faed8a0f82f" containerName="watcher-applier" Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.635546 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c28fe42-3924-426f-81a8-9faed8a0f82f" containerName="watcher-applier" Dec 02 20:41:18 crc kubenswrapper[4796]: E1202 20:41:18.635561 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e614179-fc0a-45eb-9004-1ab496f7043e" containerName="watcher-api" Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.635570 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e614179-fc0a-45eb-9004-1ab496f7043e" containerName="watcher-api" Dec 02 20:41:18 crc kubenswrapper[4796]: E1202 20:41:18.635580 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c85da558-69ae-4fd0-95a3-e1f7b55f6392" containerName="watcher-decision-engine" Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.635586 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c85da558-69ae-4fd0-95a3-e1f7b55f6392" containerName="watcher-decision-engine" Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.635779 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e614179-fc0a-45eb-9004-1ab496f7043e" containerName="watcher-api" Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.635800 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="c85da558-69ae-4fd0-95a3-e1f7b55f6392" containerName="watcher-decision-engine" Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.635812 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="81bea742-8b3a-480a-89cc-e70a964f645a" containerName="mariadb-account-delete" Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.635821 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c28fe42-3924-426f-81a8-9faed8a0f82f" containerName="watcher-applier" Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.635921 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e614179-fc0a-45eb-9004-1ab496f7043e" containerName="watcher-kuttl-api-log" Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.636785 4796 util.go:30] "No sandbox for pod can be found. 
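The burst of "RemoveStaleState: removing container" / "Deleted CPUSet assignment" / "RemoveStaleState removing state" entries is the CPU and memory managers dropping per-container accounting for the pods deleted just above (the API, applier, decision-engine and account-delete pods); although emitted at error severity, they read as routine cleanup rather than new failures. A sketch that tallies which containers had CPUSet state removed, keyed by pod UID (filename and helper name are illustrative):

```python
import re
from collections import defaultdict

# "Deleted CPUSet assignment" entries carry the pod UID and container name
# as plain key="value" fields.
STALE = re.compile(r'Deleted CPUSet assignment" podUID="([0-9a-f-]{36})" containerName="([^"]+)"')

def stale_state_cleanup(path="kubelet.log"):
    """Group 'Deleted CPUSet assignment' entries by pod UID."""
    by_pod = defaultdict(set)
    with open(path, encoding="utf-8") as f:
        for line in f:
            for pod_uid, container in STALE.findall(line):
                by_pod[pod_uid].add(container)
    return by_pod

if __name__ == "__main__":
    for pod_uid, containers in stale_state_cleanup().items():
        print(pod_uid, "->", ", ".join(sorted(containers)))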
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-9qxxc" Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.646131 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-9qxxc"] Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.694269 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.711141 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.739972 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-xmjt8"] Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.741070 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-test-account-create-update-xmjt8" Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.745081 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.756044 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-xmjt8"] Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.796374 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wndj\" (UniqueName: \"kubernetes.io/projected/3645ebb8-9035-4992-aeef-90d8c16a70ac-kube-api-access-7wndj\") pod \"watcher-db-create-9qxxc\" (UID: \"3645ebb8-9035-4992-aeef-90d8c16a70ac\") " pod="watcher-kuttl-default/watcher-db-create-9qxxc" Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.796479 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3645ebb8-9035-4992-aeef-90d8c16a70ac-operator-scripts\") pod \"watcher-db-create-9qxxc\" (UID: \"3645ebb8-9035-4992-aeef-90d8c16a70ac\") " pod="watcher-kuttl-default/watcher-db-create-9qxxc" Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.898233 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3645ebb8-9035-4992-aeef-90d8c16a70ac-operator-scripts\") pod \"watcher-db-create-9qxxc\" (UID: \"3645ebb8-9035-4992-aeef-90d8c16a70ac\") " pod="watcher-kuttl-default/watcher-db-create-9qxxc" Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.898803 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jzbm\" (UniqueName: \"kubernetes.io/projected/551fb351-61fa-4dd6-80a1-663a21ce01a9-kube-api-access-2jzbm\") pod \"watcher-test-account-create-update-xmjt8\" (UID: \"551fb351-61fa-4dd6-80a1-663a21ce01a9\") " pod="watcher-kuttl-default/watcher-test-account-create-update-xmjt8" Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.898924 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/551fb351-61fa-4dd6-80a1-663a21ce01a9-operator-scripts\") pod \"watcher-test-account-create-update-xmjt8\" (UID: \"551fb351-61fa-4dd6-80a1-663a21ce01a9\") " pod="watcher-kuttl-default/watcher-test-account-create-update-xmjt8" Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 
20:41:18.899110 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wndj\" (UniqueName: \"kubernetes.io/projected/3645ebb8-9035-4992-aeef-90d8c16a70ac-kube-api-access-7wndj\") pod \"watcher-db-create-9qxxc\" (UID: \"3645ebb8-9035-4992-aeef-90d8c16a70ac\") " pod="watcher-kuttl-default/watcher-db-create-9qxxc" Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.900682 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3645ebb8-9035-4992-aeef-90d8c16a70ac-operator-scripts\") pod \"watcher-db-create-9qxxc\" (UID: \"3645ebb8-9035-4992-aeef-90d8c16a70ac\") " pod="watcher-kuttl-default/watcher-db-create-9qxxc" Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.921511 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wndj\" (UniqueName: \"kubernetes.io/projected/3645ebb8-9035-4992-aeef-90d8c16a70ac-kube-api-access-7wndj\") pod \"watcher-db-create-9qxxc\" (UID: \"3645ebb8-9035-4992-aeef-90d8c16a70ac\") " pod="watcher-kuttl-default/watcher-db-create-9qxxc" Dec 02 20:41:18 crc kubenswrapper[4796]: I1202 20:41:18.955095 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-9qxxc" Dec 02 20:41:19 crc kubenswrapper[4796]: I1202 20:41:19.000946 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jzbm\" (UniqueName: \"kubernetes.io/projected/551fb351-61fa-4dd6-80a1-663a21ce01a9-kube-api-access-2jzbm\") pod \"watcher-test-account-create-update-xmjt8\" (UID: \"551fb351-61fa-4dd6-80a1-663a21ce01a9\") " pod="watcher-kuttl-default/watcher-test-account-create-update-xmjt8" Dec 02 20:41:19 crc kubenswrapper[4796]: I1202 20:41:19.001023 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/551fb351-61fa-4dd6-80a1-663a21ce01a9-operator-scripts\") pod \"watcher-test-account-create-update-xmjt8\" (UID: \"551fb351-61fa-4dd6-80a1-663a21ce01a9\") " pod="watcher-kuttl-default/watcher-test-account-create-update-xmjt8" Dec 02 20:41:19 crc kubenswrapper[4796]: I1202 20:41:19.001812 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/551fb351-61fa-4dd6-80a1-663a21ce01a9-operator-scripts\") pod \"watcher-test-account-create-update-xmjt8\" (UID: \"551fb351-61fa-4dd6-80a1-663a21ce01a9\") " pod="watcher-kuttl-default/watcher-test-account-create-update-xmjt8" Dec 02 20:41:19 crc kubenswrapper[4796]: I1202 20:41:19.028769 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jzbm\" (UniqueName: \"kubernetes.io/projected/551fb351-61fa-4dd6-80a1-663a21ce01a9-kube-api-access-2jzbm\") pod \"watcher-test-account-create-update-xmjt8\" (UID: \"551fb351-61fa-4dd6-80a1-663a21ce01a9\") " pod="watcher-kuttl-default/watcher-test-account-create-update-xmjt8" Dec 02 20:41:19 crc kubenswrapper[4796]: I1202 20:41:19.090162 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-test-account-create-update-xmjt8" Dec 02 20:41:19 crc kubenswrapper[4796]: I1202 20:41:19.279217 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75b946a7-08e8-4eca-91a0-cd9ad1234ff4" path="/var/lib/kubelet/pods/75b946a7-08e8-4eca-91a0-cd9ad1234ff4/volumes" Dec 02 20:41:19 crc kubenswrapper[4796]: I1202 20:41:19.280404 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81bea742-8b3a-480a-89cc-e70a964f645a" path="/var/lib/kubelet/pods/81bea742-8b3a-480a-89cc-e70a964f645a/volumes" Dec 02 20:41:19 crc kubenswrapper[4796]: I1202 20:41:19.280889 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94199217-9cde-421d-a1b0-643f0954aa47" path="/var/lib/kubelet/pods/94199217-9cde-421d-a1b0-643f0954aa47/volumes" Dec 02 20:41:19 crc kubenswrapper[4796]: I1202 20:41:19.281981 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c28fe42-3924-426f-81a8-9faed8a0f82f" path="/var/lib/kubelet/pods/9c28fe42-3924-426f-81a8-9faed8a0f82f/volumes" Dec 02 20:41:19 crc kubenswrapper[4796]: I1202 20:41:19.282514 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c85da558-69ae-4fd0-95a3-e1f7b55f6392" path="/var/lib/kubelet/pods/c85da558-69ae-4fd0-95a3-e1f7b55f6392/volumes" Dec 02 20:41:19 crc kubenswrapper[4796]: I1202 20:41:19.406605 4796 generic.go:334] "Generic (PLEG): container finished" podID="39c6f786-7734-4653-8274-4f6cf80c38ac" containerID="ee2cc79fc370a14be079c0c0327fbd4af78114b5292e0e92b64cfca3d5fa0e19" exitCode=0 Dec 02 20:41:19 crc kubenswrapper[4796]: I1202 20:41:19.406653 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"39c6f786-7734-4653-8274-4f6cf80c38ac","Type":"ContainerDied","Data":"ee2cc79fc370a14be079c0c0327fbd4af78114b5292e0e92b64cfca3d5fa0e19"} Dec 02 20:41:19 crc kubenswrapper[4796]: I1202 20:41:19.457150 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-9qxxc"] Dec 02 20:41:19 crc kubenswrapper[4796]: W1202 20:41:19.462223 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3645ebb8_9035_4992_aeef_90d8c16a70ac.slice/crio-f50d4a3088439442b805845901893d2966b6814189986b41e1341d9165420ea3 WatchSource:0}: Error finding container f50d4a3088439442b805845901893d2966b6814189986b41e1341d9165420ea3: Status 404 returned error can't find the container with id f50d4a3088439442b805845901893d2966b6814189986b41e1341d9165420ea3 Dec 02 20:41:19 crc kubenswrapper[4796]: I1202 20:41:19.588579 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:19 crc kubenswrapper[4796]: I1202 20:41:19.652701 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-xmjt8"] Dec 02 20:41:19 crc kubenswrapper[4796]: W1202 20:41:19.665000 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod551fb351_61fa_4dd6_80a1_663a21ce01a9.slice/crio-bd8b63720716e5054b6fea489eb6882b3c8f58ea5818cc1032f8d17fc3540f49 WatchSource:0}: Error finding container bd8b63720716e5054b6fea489eb6882b3c8f58ea5818cc1032f8d17fc3540f49: Status 404 returned error can't find the container with id bd8b63720716e5054b6fea489eb6882b3c8f58ea5818cc1032f8d17fc3540f49 Dec 02 20:41:19 crc kubenswrapper[4796]: I1202 20:41:19.719441 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/39c6f786-7734-4653-8274-4f6cf80c38ac-sg-core-conf-yaml\") pod \"39c6f786-7734-4653-8274-4f6cf80c38ac\" (UID: \"39c6f786-7734-4653-8274-4f6cf80c38ac\") " Dec 02 20:41:19 crc kubenswrapper[4796]: I1202 20:41:19.719537 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39c6f786-7734-4653-8274-4f6cf80c38ac-run-httpd\") pod \"39c6f786-7734-4653-8274-4f6cf80c38ac\" (UID: \"39c6f786-7734-4653-8274-4f6cf80c38ac\") " Dec 02 20:41:19 crc kubenswrapper[4796]: I1202 20:41:19.719565 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/39c6f786-7734-4653-8274-4f6cf80c38ac-ceilometer-tls-certs\") pod \"39c6f786-7734-4653-8274-4f6cf80c38ac\" (UID: \"39c6f786-7734-4653-8274-4f6cf80c38ac\") " Dec 02 20:41:19 crc kubenswrapper[4796]: I1202 20:41:19.719636 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2jqp\" (UniqueName: \"kubernetes.io/projected/39c6f786-7734-4653-8274-4f6cf80c38ac-kube-api-access-n2jqp\") pod \"39c6f786-7734-4653-8274-4f6cf80c38ac\" (UID: \"39c6f786-7734-4653-8274-4f6cf80c38ac\") " Dec 02 20:41:19 crc kubenswrapper[4796]: I1202 20:41:19.719711 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39c6f786-7734-4653-8274-4f6cf80c38ac-config-data\") pod \"39c6f786-7734-4653-8274-4f6cf80c38ac\" (UID: \"39c6f786-7734-4653-8274-4f6cf80c38ac\") " Dec 02 20:41:19 crc kubenswrapper[4796]: I1202 20:41:19.719802 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39c6f786-7734-4653-8274-4f6cf80c38ac-combined-ca-bundle\") pod \"39c6f786-7734-4653-8274-4f6cf80c38ac\" (UID: \"39c6f786-7734-4653-8274-4f6cf80c38ac\") " Dec 02 20:41:19 crc kubenswrapper[4796]: I1202 20:41:19.719840 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39c6f786-7734-4653-8274-4f6cf80c38ac-scripts\") pod \"39c6f786-7734-4653-8274-4f6cf80c38ac\" (UID: \"39c6f786-7734-4653-8274-4f6cf80c38ac\") " Dec 02 20:41:19 crc kubenswrapper[4796]: I1202 20:41:19.719869 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39c6f786-7734-4653-8274-4f6cf80c38ac-log-httpd\") pod 
\"39c6f786-7734-4653-8274-4f6cf80c38ac\" (UID: \"39c6f786-7734-4653-8274-4f6cf80c38ac\") " Dec 02 20:41:19 crc kubenswrapper[4796]: I1202 20:41:19.720775 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39c6f786-7734-4653-8274-4f6cf80c38ac-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "39c6f786-7734-4653-8274-4f6cf80c38ac" (UID: "39c6f786-7734-4653-8274-4f6cf80c38ac"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:41:19 crc kubenswrapper[4796]: I1202 20:41:19.721294 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39c6f786-7734-4653-8274-4f6cf80c38ac-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "39c6f786-7734-4653-8274-4f6cf80c38ac" (UID: "39c6f786-7734-4653-8274-4f6cf80c38ac"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:41:19 crc kubenswrapper[4796]: I1202 20:41:19.725826 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39c6f786-7734-4653-8274-4f6cf80c38ac-scripts" (OuterVolumeSpecName: "scripts") pod "39c6f786-7734-4653-8274-4f6cf80c38ac" (UID: "39c6f786-7734-4653-8274-4f6cf80c38ac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:41:19 crc kubenswrapper[4796]: I1202 20:41:19.727763 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39c6f786-7734-4653-8274-4f6cf80c38ac-kube-api-access-n2jqp" (OuterVolumeSpecName: "kube-api-access-n2jqp") pod "39c6f786-7734-4653-8274-4f6cf80c38ac" (UID: "39c6f786-7734-4653-8274-4f6cf80c38ac"). InnerVolumeSpecName "kube-api-access-n2jqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:41:19 crc kubenswrapper[4796]: I1202 20:41:19.758385 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39c6f786-7734-4653-8274-4f6cf80c38ac-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "39c6f786-7734-4653-8274-4f6cf80c38ac" (UID: "39c6f786-7734-4653-8274-4f6cf80c38ac"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:41:19 crc kubenswrapper[4796]: I1202 20:41:19.779149 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39c6f786-7734-4653-8274-4f6cf80c38ac-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "39c6f786-7734-4653-8274-4f6cf80c38ac" (UID: "39c6f786-7734-4653-8274-4f6cf80c38ac"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:41:19 crc kubenswrapper[4796]: I1202 20:41:19.809865 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39c6f786-7734-4653-8274-4f6cf80c38ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39c6f786-7734-4653-8274-4f6cf80c38ac" (UID: "39c6f786-7734-4653-8274-4f6cf80c38ac"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:41:19 crc kubenswrapper[4796]: I1202 20:41:19.822402 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39c6f786-7734-4653-8274-4f6cf80c38ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:19 crc kubenswrapper[4796]: I1202 20:41:19.822430 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39c6f786-7734-4653-8274-4f6cf80c38ac-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:19 crc kubenswrapper[4796]: I1202 20:41:19.822439 4796 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39c6f786-7734-4653-8274-4f6cf80c38ac-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:19 crc kubenswrapper[4796]: I1202 20:41:19.822448 4796 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/39c6f786-7734-4653-8274-4f6cf80c38ac-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:19 crc kubenswrapper[4796]: I1202 20:41:19.822458 4796 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39c6f786-7734-4653-8274-4f6cf80c38ac-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:19 crc kubenswrapper[4796]: I1202 20:41:19.822466 4796 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/39c6f786-7734-4653-8274-4f6cf80c38ac-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:19 crc kubenswrapper[4796]: I1202 20:41:19.822476 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2jqp\" (UniqueName: \"kubernetes.io/projected/39c6f786-7734-4653-8274-4f6cf80c38ac-kube-api-access-n2jqp\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:19 crc kubenswrapper[4796]: I1202 20:41:19.827416 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39c6f786-7734-4653-8274-4f6cf80c38ac-config-data" (OuterVolumeSpecName: "config-data") pod "39c6f786-7734-4653-8274-4f6cf80c38ac" (UID: "39c6f786-7734-4653-8274-4f6cf80c38ac"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:41:19 crc kubenswrapper[4796]: I1202 20:41:19.924240 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39c6f786-7734-4653-8274-4f6cf80c38ac-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.420733 4796 generic.go:334] "Generic (PLEG): container finished" podID="551fb351-61fa-4dd6-80a1-663a21ce01a9" containerID="5e32fe3c85dd13bb7181071b88d8bc75cc19007a9d13eb6b3ad747eb21be075a" exitCode=0 Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.420812 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-test-account-create-update-xmjt8" event={"ID":"551fb351-61fa-4dd6-80a1-663a21ce01a9","Type":"ContainerDied","Data":"5e32fe3c85dd13bb7181071b88d8bc75cc19007a9d13eb6b3ad747eb21be075a"} Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.420872 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-test-account-create-update-xmjt8" event={"ID":"551fb351-61fa-4dd6-80a1-663a21ce01a9","Type":"ContainerStarted","Data":"bd8b63720716e5054b6fea489eb6882b3c8f58ea5818cc1032f8d17fc3540f49"} Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.423145 4796 generic.go:334] "Generic (PLEG): container finished" podID="3645ebb8-9035-4992-aeef-90d8c16a70ac" containerID="d792899726d8331d184f4d39221b083e056ec1e34d1f4bacc163c1478c6dc799" exitCode=0 Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.423423 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-9qxxc" event={"ID":"3645ebb8-9035-4992-aeef-90d8c16a70ac","Type":"ContainerDied","Data":"d792899726d8331d184f4d39221b083e056ec1e34d1f4bacc163c1478c6dc799"} Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.423668 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-9qxxc" event={"ID":"3645ebb8-9035-4992-aeef-90d8c16a70ac","Type":"ContainerStarted","Data":"f50d4a3088439442b805845901893d2966b6814189986b41e1341d9165420ea3"} Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.427087 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"39c6f786-7734-4653-8274-4f6cf80c38ac","Type":"ContainerDied","Data":"ff10e2167d631cb78c7be383605c68550d77b6325bf35ad87e7935a28989d383"} Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.427132 4796 scope.go:117] "RemoveContainer" containerID="977c29af7e420c8e7bfca808c1c9756aedf0fce7f20584edb01842c71abaaaf5" Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.427332 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.480759 4796 scope.go:117] "RemoveContainer" containerID="d2611b250c35d8ad0dcf3756f3a0d4e8321d4b888042dd7c09571f405e07982c" Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.496012 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.513041 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.525977 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:41:20 crc kubenswrapper[4796]: E1202 20:41:20.526328 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39c6f786-7734-4653-8274-4f6cf80c38ac" containerName="proxy-httpd" Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.526345 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="39c6f786-7734-4653-8274-4f6cf80c38ac" containerName="proxy-httpd" Dec 02 20:41:20 crc kubenswrapper[4796]: E1202 20:41:20.526366 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39c6f786-7734-4653-8274-4f6cf80c38ac" containerName="sg-core" Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.526372 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="39c6f786-7734-4653-8274-4f6cf80c38ac" containerName="sg-core" Dec 02 20:41:20 crc kubenswrapper[4796]: E1202 20:41:20.526387 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39c6f786-7734-4653-8274-4f6cf80c38ac" containerName="ceilometer-central-agent" Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.526395 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="39c6f786-7734-4653-8274-4f6cf80c38ac" containerName="ceilometer-central-agent" Dec 02 20:41:20 crc kubenswrapper[4796]: E1202 20:41:20.526410 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39c6f786-7734-4653-8274-4f6cf80c38ac" containerName="ceilometer-notification-agent" Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.526416 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="39c6f786-7734-4653-8274-4f6cf80c38ac" containerName="ceilometer-notification-agent" Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.526554 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="39c6f786-7734-4653-8274-4f6cf80c38ac" containerName="ceilometer-notification-agent" Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.526572 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="39c6f786-7734-4653-8274-4f6cf80c38ac" containerName="ceilometer-central-agent" Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.526581 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="39c6f786-7734-4653-8274-4f6cf80c38ac" containerName="sg-core" Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.526588 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="39c6f786-7734-4653-8274-4f6cf80c38ac" containerName="proxy-httpd" Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.528036 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.528447 4796 scope.go:117] "RemoveContainer" containerID="4c81ed0b47f14c98083166b5a0fe7b575b951ae9572ead713e8af4d82ffb4339" Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.530440 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.531481 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.531702 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.550108 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.574695 4796 scope.go:117] "RemoveContainer" containerID="ee2cc79fc370a14be079c0c0327fbd4af78114b5292e0e92b64cfca3d5fa0e19" Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.638607 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dppxq\" (UniqueName: \"kubernetes.io/projected/41a65a68-e5b2-46fb-8b70-33587ef4bff6-kube-api-access-dppxq\") pod \"ceilometer-0\" (UID: \"41a65a68-e5b2-46fb-8b70-33587ef4bff6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.638671 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/41a65a68-e5b2-46fb-8b70-33587ef4bff6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"41a65a68-e5b2-46fb-8b70-33587ef4bff6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.638706 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41a65a68-e5b2-46fb-8b70-33587ef4bff6-run-httpd\") pod \"ceilometer-0\" (UID: \"41a65a68-e5b2-46fb-8b70-33587ef4bff6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.638781 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41a65a68-e5b2-46fb-8b70-33587ef4bff6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"41a65a68-e5b2-46fb-8b70-33587ef4bff6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.638817 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/41a65a68-e5b2-46fb-8b70-33587ef4bff6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"41a65a68-e5b2-46fb-8b70-33587ef4bff6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.638852 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41a65a68-e5b2-46fb-8b70-33587ef4bff6-scripts\") pod \"ceilometer-0\" (UID: \"41a65a68-e5b2-46fb-8b70-33587ef4bff6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.638890 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41a65a68-e5b2-46fb-8b70-33587ef4bff6-log-httpd\") pod \"ceilometer-0\" (UID: \"41a65a68-e5b2-46fb-8b70-33587ef4bff6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.638922 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41a65a68-e5b2-46fb-8b70-33587ef4bff6-config-data\") pod \"ceilometer-0\" (UID: \"41a65a68-e5b2-46fb-8b70-33587ef4bff6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.745709 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41a65a68-e5b2-46fb-8b70-33587ef4bff6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"41a65a68-e5b2-46fb-8b70-33587ef4bff6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.745820 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/41a65a68-e5b2-46fb-8b70-33587ef4bff6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"41a65a68-e5b2-46fb-8b70-33587ef4bff6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.745890 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41a65a68-e5b2-46fb-8b70-33587ef4bff6-scripts\") pod \"ceilometer-0\" (UID: \"41a65a68-e5b2-46fb-8b70-33587ef4bff6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.745949 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41a65a68-e5b2-46fb-8b70-33587ef4bff6-log-httpd\") pod \"ceilometer-0\" (UID: \"41a65a68-e5b2-46fb-8b70-33587ef4bff6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.746003 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41a65a68-e5b2-46fb-8b70-33587ef4bff6-config-data\") pod \"ceilometer-0\" (UID: \"41a65a68-e5b2-46fb-8b70-33587ef4bff6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.746079 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dppxq\" (UniqueName: \"kubernetes.io/projected/41a65a68-e5b2-46fb-8b70-33587ef4bff6-kube-api-access-dppxq\") pod \"ceilometer-0\" (UID: \"41a65a68-e5b2-46fb-8b70-33587ef4bff6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.746138 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/41a65a68-e5b2-46fb-8b70-33587ef4bff6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"41a65a68-e5b2-46fb-8b70-33587ef4bff6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.746209 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41a65a68-e5b2-46fb-8b70-33587ef4bff6-run-httpd\") pod \"ceilometer-0\" (UID: 
\"41a65a68-e5b2-46fb-8b70-33587ef4bff6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.746527 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41a65a68-e5b2-46fb-8b70-33587ef4bff6-log-httpd\") pod \"ceilometer-0\" (UID: \"41a65a68-e5b2-46fb-8b70-33587ef4bff6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.747307 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41a65a68-e5b2-46fb-8b70-33587ef4bff6-run-httpd\") pod \"ceilometer-0\" (UID: \"41a65a68-e5b2-46fb-8b70-33587ef4bff6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.752777 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41a65a68-e5b2-46fb-8b70-33587ef4bff6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"41a65a68-e5b2-46fb-8b70-33587ef4bff6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.753908 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/41a65a68-e5b2-46fb-8b70-33587ef4bff6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"41a65a68-e5b2-46fb-8b70-33587ef4bff6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.757221 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/41a65a68-e5b2-46fb-8b70-33587ef4bff6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"41a65a68-e5b2-46fb-8b70-33587ef4bff6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.760024 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41a65a68-e5b2-46fb-8b70-33587ef4bff6-config-data\") pod \"ceilometer-0\" (UID: \"41a65a68-e5b2-46fb-8b70-33587ef4bff6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.767649 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41a65a68-e5b2-46fb-8b70-33587ef4bff6-scripts\") pod \"ceilometer-0\" (UID: \"41a65a68-e5b2-46fb-8b70-33587ef4bff6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.784850 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dppxq\" (UniqueName: \"kubernetes.io/projected/41a65a68-e5b2-46fb-8b70-33587ef4bff6-kube-api-access-dppxq\") pod \"ceilometer-0\" (UID: \"41a65a68-e5b2-46fb-8b70-33587ef4bff6\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:20 crc kubenswrapper[4796]: I1202 20:41:20.861819 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:21 crc kubenswrapper[4796]: I1202 20:41:21.274289 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39c6f786-7734-4653-8274-4f6cf80c38ac" path="/var/lib/kubelet/pods/39c6f786-7734-4653-8274-4f6cf80c38ac/volumes" Dec 02 20:41:21 crc kubenswrapper[4796]: I1202 20:41:21.438984 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:41:21 crc kubenswrapper[4796]: W1202 20:41:21.453159 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41a65a68_e5b2_46fb_8b70_33587ef4bff6.slice/crio-b67c264f1431a35683adb393adf9ea2dcc9beb53516e022ad5892f209eb8a408 WatchSource:0}: Error finding container b67c264f1431a35683adb393adf9ea2dcc9beb53516e022ad5892f209eb8a408: Status 404 returned error can't find the container with id b67c264f1431a35683adb393adf9ea2dcc9beb53516e022ad5892f209eb8a408 Dec 02 20:41:21 crc kubenswrapper[4796]: I1202 20:41:21.855209 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-test-account-create-update-xmjt8" Dec 02 20:41:21 crc kubenswrapper[4796]: I1202 20:41:21.861683 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-9qxxc" Dec 02 20:41:21 crc kubenswrapper[4796]: I1202 20:41:21.870778 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jzbm\" (UniqueName: \"kubernetes.io/projected/551fb351-61fa-4dd6-80a1-663a21ce01a9-kube-api-access-2jzbm\") pod \"551fb351-61fa-4dd6-80a1-663a21ce01a9\" (UID: \"551fb351-61fa-4dd6-80a1-663a21ce01a9\") " Dec 02 20:41:21 crc kubenswrapper[4796]: I1202 20:41:21.870885 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3645ebb8-9035-4992-aeef-90d8c16a70ac-operator-scripts\") pod \"3645ebb8-9035-4992-aeef-90d8c16a70ac\" (UID: \"3645ebb8-9035-4992-aeef-90d8c16a70ac\") " Dec 02 20:41:21 crc kubenswrapper[4796]: I1202 20:41:21.870917 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wndj\" (UniqueName: \"kubernetes.io/projected/3645ebb8-9035-4992-aeef-90d8c16a70ac-kube-api-access-7wndj\") pod \"3645ebb8-9035-4992-aeef-90d8c16a70ac\" (UID: \"3645ebb8-9035-4992-aeef-90d8c16a70ac\") " Dec 02 20:41:21 crc kubenswrapper[4796]: I1202 20:41:21.870955 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/551fb351-61fa-4dd6-80a1-663a21ce01a9-operator-scripts\") pod \"551fb351-61fa-4dd6-80a1-663a21ce01a9\" (UID: \"551fb351-61fa-4dd6-80a1-663a21ce01a9\") " Dec 02 20:41:21 crc kubenswrapper[4796]: I1202 20:41:21.872749 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3645ebb8-9035-4992-aeef-90d8c16a70ac-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3645ebb8-9035-4992-aeef-90d8c16a70ac" (UID: "3645ebb8-9035-4992-aeef-90d8c16a70ac"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:41:21 crc kubenswrapper[4796]: I1202 20:41:21.872779 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/551fb351-61fa-4dd6-80a1-663a21ce01a9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "551fb351-61fa-4dd6-80a1-663a21ce01a9" (UID: "551fb351-61fa-4dd6-80a1-663a21ce01a9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:41:21 crc kubenswrapper[4796]: I1202 20:41:21.881723 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/551fb351-61fa-4dd6-80a1-663a21ce01a9-kube-api-access-2jzbm" (OuterVolumeSpecName: "kube-api-access-2jzbm") pod "551fb351-61fa-4dd6-80a1-663a21ce01a9" (UID: "551fb351-61fa-4dd6-80a1-663a21ce01a9"). InnerVolumeSpecName "kube-api-access-2jzbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:41:21 crc kubenswrapper[4796]: I1202 20:41:21.882521 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3645ebb8-9035-4992-aeef-90d8c16a70ac-kube-api-access-7wndj" (OuterVolumeSpecName: "kube-api-access-7wndj") pod "3645ebb8-9035-4992-aeef-90d8c16a70ac" (UID: "3645ebb8-9035-4992-aeef-90d8c16a70ac"). InnerVolumeSpecName "kube-api-access-7wndj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:41:21 crc kubenswrapper[4796]: I1202 20:41:21.973687 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3645ebb8-9035-4992-aeef-90d8c16a70ac-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:21 crc kubenswrapper[4796]: I1202 20:41:21.973721 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wndj\" (UniqueName: \"kubernetes.io/projected/3645ebb8-9035-4992-aeef-90d8c16a70ac-kube-api-access-7wndj\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:21 crc kubenswrapper[4796]: I1202 20:41:21.973733 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/551fb351-61fa-4dd6-80a1-663a21ce01a9-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:21 crc kubenswrapper[4796]: I1202 20:41:21.973742 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jzbm\" (UniqueName: \"kubernetes.io/projected/551fb351-61fa-4dd6-80a1-663a21ce01a9-kube-api-access-2jzbm\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:22 crc kubenswrapper[4796]: I1202 20:41:22.448465 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-test-account-create-update-xmjt8" event={"ID":"551fb351-61fa-4dd6-80a1-663a21ce01a9","Type":"ContainerDied","Data":"bd8b63720716e5054b6fea489eb6882b3c8f58ea5818cc1032f8d17fc3540f49"} Dec 02 20:41:22 crc kubenswrapper[4796]: I1202 20:41:22.448774 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd8b63720716e5054b6fea489eb6882b3c8f58ea5818cc1032f8d17fc3540f49" Dec 02 20:41:22 crc kubenswrapper[4796]: I1202 20:41:22.448530 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-test-account-create-update-xmjt8" Dec 02 20:41:22 crc kubenswrapper[4796]: I1202 20:41:22.450622 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"41a65a68-e5b2-46fb-8b70-33587ef4bff6","Type":"ContainerStarted","Data":"e1bf15641c426950c2d100fcacf973031097b62fafac174d36b16d8a55c6fe10"} Dec 02 20:41:22 crc kubenswrapper[4796]: I1202 20:41:22.450688 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"41a65a68-e5b2-46fb-8b70-33587ef4bff6","Type":"ContainerStarted","Data":"b67c264f1431a35683adb393adf9ea2dcc9beb53516e022ad5892f209eb8a408"} Dec 02 20:41:22 crc kubenswrapper[4796]: I1202 20:41:22.452404 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-9qxxc" event={"ID":"3645ebb8-9035-4992-aeef-90d8c16a70ac","Type":"ContainerDied","Data":"f50d4a3088439442b805845901893d2966b6814189986b41e1341d9165420ea3"} Dec 02 20:41:22 crc kubenswrapper[4796]: I1202 20:41:22.452453 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f50d4a3088439442b805845901893d2966b6814189986b41e1341d9165420ea3" Dec 02 20:41:22 crc kubenswrapper[4796]: I1202 20:41:22.452479 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-9qxxc" Dec 02 20:41:23 crc kubenswrapper[4796]: I1202 20:41:23.485336 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"41a65a68-e5b2-46fb-8b70-33587ef4bff6","Type":"ContainerStarted","Data":"7f263aaab7360d8c6902261584ed6fcdf223a79bebe6a26cc2688252a85a1bc6"} Dec 02 20:41:24 crc kubenswrapper[4796]: I1202 20:41:24.011478 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-4vnvt"] Dec 02 20:41:24 crc kubenswrapper[4796]: E1202 20:41:24.012350 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="551fb351-61fa-4dd6-80a1-663a21ce01a9" containerName="mariadb-account-create-update" Dec 02 20:41:24 crc kubenswrapper[4796]: I1202 20:41:24.012373 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="551fb351-61fa-4dd6-80a1-663a21ce01a9" containerName="mariadb-account-create-update" Dec 02 20:41:24 crc kubenswrapper[4796]: E1202 20:41:24.012387 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3645ebb8-9035-4992-aeef-90d8c16a70ac" containerName="mariadb-database-create" Dec 02 20:41:24 crc kubenswrapper[4796]: I1202 20:41:24.012395 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="3645ebb8-9035-4992-aeef-90d8c16a70ac" containerName="mariadb-database-create" Dec 02 20:41:24 crc kubenswrapper[4796]: I1202 20:41:24.012618 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="551fb351-61fa-4dd6-80a1-663a21ce01a9" containerName="mariadb-account-create-update" Dec 02 20:41:24 crc kubenswrapper[4796]: I1202 20:41:24.012643 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="3645ebb8-9035-4992-aeef-90d8c16a70ac" containerName="mariadb-database-create" Dec 02 20:41:24 crc kubenswrapper[4796]: I1202 20:41:24.013443 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-4vnvt" Dec 02 20:41:24 crc kubenswrapper[4796]: I1202 20:41:24.015490 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Dec 02 20:41:24 crc kubenswrapper[4796]: I1202 20:41:24.017535 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-cxd7v" Dec 02 20:41:24 crc kubenswrapper[4796]: I1202 20:41:24.025681 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-4vnvt"] Dec 02 20:41:24 crc kubenswrapper[4796]: I1202 20:41:24.217372 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/858d71d0-255f-4035-8ca7-3cb9a2b70ced-db-sync-config-data\") pod \"watcher-kuttl-db-sync-4vnvt\" (UID: \"858d71d0-255f-4035-8ca7-3cb9a2b70ced\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-4vnvt" Dec 02 20:41:24 crc kubenswrapper[4796]: I1202 20:41:24.217424 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/858d71d0-255f-4035-8ca7-3cb9a2b70ced-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-4vnvt\" (UID: \"858d71d0-255f-4035-8ca7-3cb9a2b70ced\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-4vnvt" Dec 02 20:41:24 crc kubenswrapper[4796]: I1202 20:41:24.217711 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/858d71d0-255f-4035-8ca7-3cb9a2b70ced-config-data\") pod \"watcher-kuttl-db-sync-4vnvt\" (UID: \"858d71d0-255f-4035-8ca7-3cb9a2b70ced\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-4vnvt" Dec 02 20:41:24 crc kubenswrapper[4796]: I1202 20:41:24.217754 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twldm\" (UniqueName: \"kubernetes.io/projected/858d71d0-255f-4035-8ca7-3cb9a2b70ced-kube-api-access-twldm\") pod \"watcher-kuttl-db-sync-4vnvt\" (UID: \"858d71d0-255f-4035-8ca7-3cb9a2b70ced\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-4vnvt" Dec 02 20:41:24 crc kubenswrapper[4796]: I1202 20:41:24.319288 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twldm\" (UniqueName: \"kubernetes.io/projected/858d71d0-255f-4035-8ca7-3cb9a2b70ced-kube-api-access-twldm\") pod \"watcher-kuttl-db-sync-4vnvt\" (UID: \"858d71d0-255f-4035-8ca7-3cb9a2b70ced\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-4vnvt" Dec 02 20:41:24 crc kubenswrapper[4796]: I1202 20:41:24.319392 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/858d71d0-255f-4035-8ca7-3cb9a2b70ced-db-sync-config-data\") pod \"watcher-kuttl-db-sync-4vnvt\" (UID: \"858d71d0-255f-4035-8ca7-3cb9a2b70ced\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-4vnvt" Dec 02 20:41:24 crc kubenswrapper[4796]: I1202 20:41:24.319419 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/858d71d0-255f-4035-8ca7-3cb9a2b70ced-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-4vnvt\" (UID: \"858d71d0-255f-4035-8ca7-3cb9a2b70ced\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-4vnvt" Dec 02 
20:41:24 crc kubenswrapper[4796]: I1202 20:41:24.319481 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/858d71d0-255f-4035-8ca7-3cb9a2b70ced-config-data\") pod \"watcher-kuttl-db-sync-4vnvt\" (UID: \"858d71d0-255f-4035-8ca7-3cb9a2b70ced\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-4vnvt" Dec 02 20:41:24 crc kubenswrapper[4796]: I1202 20:41:24.343072 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twldm\" (UniqueName: \"kubernetes.io/projected/858d71d0-255f-4035-8ca7-3cb9a2b70ced-kube-api-access-twldm\") pod \"watcher-kuttl-db-sync-4vnvt\" (UID: \"858d71d0-255f-4035-8ca7-3cb9a2b70ced\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-4vnvt" Dec 02 20:41:24 crc kubenswrapper[4796]: I1202 20:41:24.343717 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/858d71d0-255f-4035-8ca7-3cb9a2b70ced-db-sync-config-data\") pod \"watcher-kuttl-db-sync-4vnvt\" (UID: \"858d71d0-255f-4035-8ca7-3cb9a2b70ced\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-4vnvt" Dec 02 20:41:24 crc kubenswrapper[4796]: I1202 20:41:24.343778 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/858d71d0-255f-4035-8ca7-3cb9a2b70ced-config-data\") pod \"watcher-kuttl-db-sync-4vnvt\" (UID: \"858d71d0-255f-4035-8ca7-3cb9a2b70ced\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-4vnvt" Dec 02 20:41:24 crc kubenswrapper[4796]: I1202 20:41:24.344532 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/858d71d0-255f-4035-8ca7-3cb9a2b70ced-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-4vnvt\" (UID: \"858d71d0-255f-4035-8ca7-3cb9a2b70ced\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-4vnvt" Dec 02 20:41:24 crc kubenswrapper[4796]: I1202 20:41:24.500206 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"41a65a68-e5b2-46fb-8b70-33587ef4bff6","Type":"ContainerStarted","Data":"74e99c0b8854d8bf2608f0dccf204b5b7f08080b3bbdee377cbcab4b0eda49e2"} Dec 02 20:41:24 crc kubenswrapper[4796]: I1202 20:41:24.631641 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-4vnvt" Dec 02 20:41:25 crc kubenswrapper[4796]: I1202 20:41:25.216743 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-4vnvt"] Dec 02 20:41:25 crc kubenswrapper[4796]: W1202 20:41:25.220096 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod858d71d0_255f_4035_8ca7_3cb9a2b70ced.slice/crio-983d3488c6de09c22d0f463d0817d35ad4698172aeaceb34648b8b6f25643acd WatchSource:0}: Error finding container 983d3488c6de09c22d0f463d0817d35ad4698172aeaceb34648b8b6f25643acd: Status 404 returned error can't find the container with id 983d3488c6de09c22d0f463d0817d35ad4698172aeaceb34648b8b6f25643acd Dec 02 20:41:25 crc kubenswrapper[4796]: I1202 20:41:25.509542 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-4vnvt" event={"ID":"858d71d0-255f-4035-8ca7-3cb9a2b70ced","Type":"ContainerStarted","Data":"6814b5644c4d6d26205ece9524574ea56e291c46dbad48c7fd87e2efeb1ea969"} Dec 02 20:41:25 crc kubenswrapper[4796]: I1202 20:41:25.509913 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-4vnvt" event={"ID":"858d71d0-255f-4035-8ca7-3cb9a2b70ced","Type":"ContainerStarted","Data":"983d3488c6de09c22d0f463d0817d35ad4698172aeaceb34648b8b6f25643acd"} Dec 02 20:41:25 crc kubenswrapper[4796]: I1202 20:41:25.513323 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"41a65a68-e5b2-46fb-8b70-33587ef4bff6","Type":"ContainerStarted","Data":"02358188fa1a84788442162ac3d9c8b5c814d7ee5ec30a27dcb2e903c9cd231d"} Dec 02 20:41:25 crc kubenswrapper[4796]: I1202 20:41:25.513567 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:25 crc kubenswrapper[4796]: I1202 20:41:25.549240 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-4vnvt" podStartSLOduration=2.5492226479999998 podStartE2EDuration="2.549222648s" podCreationTimestamp="2025-12-02 20:41:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:41:25.527134668 +0000 UTC m=+1768.530510202" watchObservedRunningTime="2025-12-02 20:41:25.549222648 +0000 UTC m=+1768.552598182" Dec 02 20:41:25 crc kubenswrapper[4796]: I1202 20:41:25.553793 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.228106746 podStartE2EDuration="5.553776609s" podCreationTimestamp="2025-12-02 20:41:20 +0000 UTC" firstStartedPulling="2025-12-02 20:41:21.465383049 +0000 UTC m=+1764.468758583" lastFinishedPulling="2025-12-02 20:41:24.791052912 +0000 UTC m=+1767.794428446" observedRunningTime="2025-12-02 20:41:25.548619304 +0000 UTC m=+1768.551994838" watchObservedRunningTime="2025-12-02 20:41:25.553776609 +0000 UTC m=+1768.557152143" Dec 02 20:41:27 crc kubenswrapper[4796]: I1202 20:41:27.269403 4796 scope.go:117] "RemoveContainer" containerID="a3b1e4c1e71e5db50d52130283a4f488c4d2f8d9bfa463ec7357e0975e3daac1" Dec 02 20:41:27 crc kubenswrapper[4796]: E1202 20:41:27.269821 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-wzhpq_openshift-machine-config-operator(5558dc7c-93f9-4212-bf22-fdec743e47ee)\"" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" Dec 02 20:41:27 crc kubenswrapper[4796]: I1202 20:41:27.518723 4796 scope.go:117] "RemoveContainer" containerID="841b4e32921827f907e6e32b11aae05394731581e2474da032bd405d4587e919" Dec 02 20:41:27 crc kubenswrapper[4796]: I1202 20:41:27.546106 4796 scope.go:117] "RemoveContainer" containerID="adf5155b5cc0b7f15b45bddc347d7e83448039f8bd75f3fb88fc89bdb7797520" Dec 02 20:41:27 crc kubenswrapper[4796]: I1202 20:41:27.595958 4796 scope.go:117] "RemoveContainer" containerID="9e9ec0ec6676b65ca53851c5c4195d0332d595aca2b1d5aced2cba3d2141287f" Dec 02 20:41:27 crc kubenswrapper[4796]: I1202 20:41:27.649379 4796 scope.go:117] "RemoveContainer" containerID="c9a43d77cdc62f84c440c20fb089546c4fac659f38121ea1027d82204fc1db07" Dec 02 20:41:27 crc kubenswrapper[4796]: I1202 20:41:27.732119 4796 scope.go:117] "RemoveContainer" containerID="ca142f76575b505b3d674f607c31a1710c67f95ec4c133323ad2fe3c1a1f7b05" Dec 02 20:41:27 crc kubenswrapper[4796]: I1202 20:41:27.849288 4796 scope.go:117] "RemoveContainer" containerID="4c50c366c50376ce0af10dbf1d28f82a1ce485ff4bd6f11ab275566227cc3db9" Dec 02 20:41:27 crc kubenswrapper[4796]: I1202 20:41:27.876425 4796 scope.go:117] "RemoveContainer" containerID="9734d9b2e03e215354b7d81cb7bc500e14e05ef81fad81d3e5c6148055876e2b" Dec 02 20:41:28 crc kubenswrapper[4796]: I1202 20:41:28.545168 4796 generic.go:334] "Generic (PLEG): container finished" podID="858d71d0-255f-4035-8ca7-3cb9a2b70ced" containerID="6814b5644c4d6d26205ece9524574ea56e291c46dbad48c7fd87e2efeb1ea969" exitCode=0 Dec 02 20:41:28 crc kubenswrapper[4796]: I1202 20:41:28.545245 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-4vnvt" event={"ID":"858d71d0-255f-4035-8ca7-3cb9a2b70ced","Type":"ContainerDied","Data":"6814b5644c4d6d26205ece9524574ea56e291c46dbad48c7fd87e2efeb1ea969"} Dec 02 20:41:29 crc kubenswrapper[4796]: I1202 20:41:29.921117 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-4vnvt" Dec 02 20:41:30 crc kubenswrapper[4796]: I1202 20:41:30.048587 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/858d71d0-255f-4035-8ca7-3cb9a2b70ced-combined-ca-bundle\") pod \"858d71d0-255f-4035-8ca7-3cb9a2b70ced\" (UID: \"858d71d0-255f-4035-8ca7-3cb9a2b70ced\") " Dec 02 20:41:30 crc kubenswrapper[4796]: I1202 20:41:30.048708 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twldm\" (UniqueName: \"kubernetes.io/projected/858d71d0-255f-4035-8ca7-3cb9a2b70ced-kube-api-access-twldm\") pod \"858d71d0-255f-4035-8ca7-3cb9a2b70ced\" (UID: \"858d71d0-255f-4035-8ca7-3cb9a2b70ced\") " Dec 02 20:41:30 crc kubenswrapper[4796]: I1202 20:41:30.048792 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/858d71d0-255f-4035-8ca7-3cb9a2b70ced-config-data\") pod \"858d71d0-255f-4035-8ca7-3cb9a2b70ced\" (UID: \"858d71d0-255f-4035-8ca7-3cb9a2b70ced\") " Dec 02 20:41:30 crc kubenswrapper[4796]: I1202 20:41:30.048862 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/858d71d0-255f-4035-8ca7-3cb9a2b70ced-db-sync-config-data\") pod \"858d71d0-255f-4035-8ca7-3cb9a2b70ced\" (UID: \"858d71d0-255f-4035-8ca7-3cb9a2b70ced\") " Dec 02 20:41:30 crc kubenswrapper[4796]: I1202 20:41:30.054471 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/858d71d0-255f-4035-8ca7-3cb9a2b70ced-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "858d71d0-255f-4035-8ca7-3cb9a2b70ced" (UID: "858d71d0-255f-4035-8ca7-3cb9a2b70ced"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:41:30 crc kubenswrapper[4796]: I1202 20:41:30.054533 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/858d71d0-255f-4035-8ca7-3cb9a2b70ced-kube-api-access-twldm" (OuterVolumeSpecName: "kube-api-access-twldm") pod "858d71d0-255f-4035-8ca7-3cb9a2b70ced" (UID: "858d71d0-255f-4035-8ca7-3cb9a2b70ced"). InnerVolumeSpecName "kube-api-access-twldm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:41:30 crc kubenswrapper[4796]: I1202 20:41:30.090406 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/858d71d0-255f-4035-8ca7-3cb9a2b70ced-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "858d71d0-255f-4035-8ca7-3cb9a2b70ced" (UID: "858d71d0-255f-4035-8ca7-3cb9a2b70ced"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:41:30 crc kubenswrapper[4796]: I1202 20:41:30.138994 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/858d71d0-255f-4035-8ca7-3cb9a2b70ced-config-data" (OuterVolumeSpecName: "config-data") pod "858d71d0-255f-4035-8ca7-3cb9a2b70ced" (UID: "858d71d0-255f-4035-8ca7-3cb9a2b70ced"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:41:30 crc kubenswrapper[4796]: I1202 20:41:30.151077 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/858d71d0-255f-4035-8ca7-3cb9a2b70ced-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:30 crc kubenswrapper[4796]: I1202 20:41:30.151345 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twldm\" (UniqueName: \"kubernetes.io/projected/858d71d0-255f-4035-8ca7-3cb9a2b70ced-kube-api-access-twldm\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:30 crc kubenswrapper[4796]: I1202 20:41:30.151412 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/858d71d0-255f-4035-8ca7-3cb9a2b70ced-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:30 crc kubenswrapper[4796]: I1202 20:41:30.151467 4796 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/858d71d0-255f-4035-8ca7-3cb9a2b70ced-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:30 crc kubenswrapper[4796]: I1202 20:41:30.572830 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-4vnvt" event={"ID":"858d71d0-255f-4035-8ca7-3cb9a2b70ced","Type":"ContainerDied","Data":"983d3488c6de09c22d0f463d0817d35ad4698172aeaceb34648b8b6f25643acd"} Dec 02 20:41:30 crc kubenswrapper[4796]: I1202 20:41:30.573203 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="983d3488c6de09c22d0f463d0817d35ad4698172aeaceb34648b8b6f25643acd" Dec 02 20:41:30 crc kubenswrapper[4796]: I1202 20:41:30.572918 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-4vnvt" Dec 02 20:41:30 crc kubenswrapper[4796]: I1202 20:41:30.850380 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:41:30 crc kubenswrapper[4796]: E1202 20:41:30.850774 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="858d71d0-255f-4035-8ca7-3cb9a2b70ced" containerName="watcher-kuttl-db-sync" Dec 02 20:41:30 crc kubenswrapper[4796]: I1202 20:41:30.850796 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="858d71d0-255f-4035-8ca7-3cb9a2b70ced" containerName="watcher-kuttl-db-sync" Dec 02 20:41:30 crc kubenswrapper[4796]: I1202 20:41:30.850980 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="858d71d0-255f-4035-8ca7-3cb9a2b70ced" containerName="watcher-kuttl-db-sync" Dec 02 20:41:30 crc kubenswrapper[4796]: I1202 20:41:30.851888 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:41:30 crc kubenswrapper[4796]: I1202 20:41:30.855758 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Dec 02 20:41:30 crc kubenswrapper[4796]: I1202 20:41:30.856127 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-cxd7v" Dec 02 20:41:30 crc kubenswrapper[4796]: I1202 20:41:30.893838 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:41:30 crc kubenswrapper[4796]: I1202 20:41:30.925300 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Dec 02 20:41:30 crc kubenswrapper[4796]: I1202 20:41:30.926969 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:41:30 crc kubenswrapper[4796]: I1202 20:41:30.964032 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe178b48-56b0-493c-a133-ca8dacce432b-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"fe178b48-56b0-493c-a133-ca8dacce432b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:41:30 crc kubenswrapper[4796]: I1202 20:41:30.964134 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe178b48-56b0-493c-a133-ca8dacce432b-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"fe178b48-56b0-493c-a133-ca8dacce432b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:41:30 crc kubenswrapper[4796]: I1202 20:41:30.964205 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/fe178b48-56b0-493c-a133-ca8dacce432b-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"fe178b48-56b0-493c-a133-ca8dacce432b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:41:30 crc kubenswrapper[4796]: I1202 20:41:30.964242 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/fe178b48-56b0-493c-a133-ca8dacce432b-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"fe178b48-56b0-493c-a133-ca8dacce432b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:41:30 crc kubenswrapper[4796]: I1202 20:41:30.964293 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe178b48-56b0-493c-a133-ca8dacce432b-logs\") pod \"watcher-kuttl-api-0\" (UID: \"fe178b48-56b0-493c-a133-ca8dacce432b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:41:30 crc kubenswrapper[4796]: I1202 20:41:30.964327 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvgkm\" (UniqueName: \"kubernetes.io/projected/fe178b48-56b0-493c-a133-ca8dacce432b-kube-api-access-zvgkm\") pod \"watcher-kuttl-api-0\" (UID: \"fe178b48-56b0-493c-a133-ca8dacce432b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:41:30 crc kubenswrapper[4796]: I1202 20:41:30.964995 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.066157 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvgkm\" (UniqueName: \"kubernetes.io/projected/fe178b48-56b0-493c-a133-ca8dacce432b-kube-api-access-zvgkm\") pod \"watcher-kuttl-api-0\" (UID: \"fe178b48-56b0-493c-a133-ca8dacce432b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.066399 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe178b48-56b0-493c-a133-ca8dacce432b-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"fe178b48-56b0-493c-a133-ca8dacce432b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.066456 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/abb95a24-b20f-4f5d-b988-f994a63683d7-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"abb95a24-b20f-4f5d-b988-f994a63683d7\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.096520 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe178b48-56b0-493c-a133-ca8dacce432b-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"fe178b48-56b0-493c-a133-ca8dacce432b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.096586 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smghl\" (UniqueName: \"kubernetes.io/projected/abb95a24-b20f-4f5d-b988-f994a63683d7-kube-api-access-smghl\") pod \"watcher-kuttl-api-1\" (UID: \"abb95a24-b20f-4f5d-b988-f994a63683d7\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.096643 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/abb95a24-b20f-4f5d-b988-f994a63683d7-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"abb95a24-b20f-4f5d-b988-f994a63683d7\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.096730 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abb95a24-b20f-4f5d-b988-f994a63683d7-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"abb95a24-b20f-4f5d-b988-f994a63683d7\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.096762 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/fe178b48-56b0-493c-a133-ca8dacce432b-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"fe178b48-56b0-493c-a133-ca8dacce432b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.096785 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abb95a24-b20f-4f5d-b988-f994a63683d7-logs\") pod \"watcher-kuttl-api-1\" (UID: 
\"abb95a24-b20f-4f5d-b988-f994a63683d7\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.096835 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/fe178b48-56b0-493c-a133-ca8dacce432b-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"fe178b48-56b0-493c-a133-ca8dacce432b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.096892 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abb95a24-b20f-4f5d-b988-f994a63683d7-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"abb95a24-b20f-4f5d-b988-f994a63683d7\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.096909 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe178b48-56b0-493c-a133-ca8dacce432b-logs\") pod \"watcher-kuttl-api-0\" (UID: \"fe178b48-56b0-493c-a133-ca8dacce432b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.098154 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe178b48-56b0-493c-a133-ca8dacce432b-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"fe178b48-56b0-493c-a133-ca8dacce432b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.101436 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe178b48-56b0-493c-a133-ca8dacce432b-logs\") pod \"watcher-kuttl-api-0\" (UID: \"fe178b48-56b0-493c-a133-ca8dacce432b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.108000 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe178b48-56b0-493c-a133-ca8dacce432b-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"fe178b48-56b0-493c-a133-ca8dacce432b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.115340 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/fe178b48-56b0-493c-a133-ca8dacce432b-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"fe178b48-56b0-493c-a133-ca8dacce432b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.117661 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvgkm\" (UniqueName: \"kubernetes.io/projected/fe178b48-56b0-493c-a133-ca8dacce432b-kube-api-access-zvgkm\") pod \"watcher-kuttl-api-0\" (UID: \"fe178b48-56b0-493c-a133-ca8dacce432b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.121762 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/fe178b48-56b0-493c-a133-ca8dacce432b-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"fe178b48-56b0-493c-a133-ca8dacce432b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:41:31 crc kubenswrapper[4796]: 
I1202 20:41:31.157096 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.158693 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.179323 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.181647 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.188808 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.199920 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abb95a24-b20f-4f5d-b988-f994a63683d7-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"abb95a24-b20f-4f5d-b988-f994a63683d7\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.200010 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/abb95a24-b20f-4f5d-b988-f994a63683d7-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"abb95a24-b20f-4f5d-b988-f994a63683d7\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.200040 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smghl\" (UniqueName: \"kubernetes.io/projected/abb95a24-b20f-4f5d-b988-f994a63683d7-kube-api-access-smghl\") pod \"watcher-kuttl-api-1\" (UID: \"abb95a24-b20f-4f5d-b988-f994a63683d7\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.200065 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/abb95a24-b20f-4f5d-b988-f994a63683d7-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"abb95a24-b20f-4f5d-b988-f994a63683d7\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.200100 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abb95a24-b20f-4f5d-b988-f994a63683d7-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"abb95a24-b20f-4f5d-b988-f994a63683d7\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.200125 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abb95a24-b20f-4f5d-b988-f994a63683d7-logs\") pod \"watcher-kuttl-api-1\" (UID: \"abb95a24-b20f-4f5d-b988-f994a63683d7\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.200645 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abb95a24-b20f-4f5d-b988-f994a63683d7-logs\") pod \"watcher-kuttl-api-1\" (UID: \"abb95a24-b20f-4f5d-b988-f994a63683d7\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:41:31 crc 
kubenswrapper[4796]: I1202 20:41:31.211670 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abb95a24-b20f-4f5d-b988-f994a63683d7-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"abb95a24-b20f-4f5d-b988-f994a63683d7\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.220482 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/abb95a24-b20f-4f5d-b988-f994a63683d7-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"abb95a24-b20f-4f5d-b988-f994a63683d7\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.220854 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abb95a24-b20f-4f5d-b988-f994a63683d7-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"abb95a24-b20f-4f5d-b988-f994a63683d7\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.222025 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/abb95a24-b20f-4f5d-b988-f994a63683d7-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"abb95a24-b20f-4f5d-b988-f994a63683d7\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.232991 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smghl\" (UniqueName: \"kubernetes.io/projected/abb95a24-b20f-4f5d-b988-f994a63683d7-kube-api-access-smghl\") pod \"watcher-kuttl-api-1\" (UID: \"abb95a24-b20f-4f5d-b988-f994a63683d7\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.243968 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.245054 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.252851 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.262566 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.306041 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.306916 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f5576a7-0257-4785-b47e-b6990f016c51-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"6f5576a7-0257-4785-b47e-b6990f016c51\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.306957 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc2x2\" (UniqueName: \"kubernetes.io/projected/6f5576a7-0257-4785-b47e-b6990f016c51-kube-api-access-xc2x2\") pod \"watcher-kuttl-applier-0\" (UID: \"6f5576a7-0257-4785-b47e-b6990f016c51\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.306987 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f5576a7-0257-4785-b47e-b6990f016c51-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"6f5576a7-0257-4785-b47e-b6990f016c51\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.307021 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6f5576a7-0257-4785-b47e-b6990f016c51-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"6f5576a7-0257-4785-b47e-b6990f016c51\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.307043 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5576a7-0257-4785-b47e-b6990f016c51-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"6f5576a7-0257-4785-b47e-b6990f016c51\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.408803 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/067c599c-9b7a-4db5-a6a9-62f520e4b67f-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"067c599c-9b7a-4db5-a6a9-62f520e4b67f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.408865 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f5576a7-0257-4785-b47e-b6990f016c51-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"6f5576a7-0257-4785-b47e-b6990f016c51\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.408901 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/067c599c-9b7a-4db5-a6a9-62f520e4b67f-config-data\") 
pod \"watcher-kuttl-decision-engine-0\" (UID: \"067c599c-9b7a-4db5-a6a9-62f520e4b67f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.408958 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc2x2\" (UniqueName: \"kubernetes.io/projected/6f5576a7-0257-4785-b47e-b6990f016c51-kube-api-access-xc2x2\") pod \"watcher-kuttl-applier-0\" (UID: \"6f5576a7-0257-4785-b47e-b6990f016c51\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.408987 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/067c599c-9b7a-4db5-a6a9-62f520e4b67f-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"067c599c-9b7a-4db5-a6a9-62f520e4b67f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.409024 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f5576a7-0257-4785-b47e-b6990f016c51-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"6f5576a7-0257-4785-b47e-b6990f016c51\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.409054 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/067c599c-9b7a-4db5-a6a9-62f520e4b67f-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"067c599c-9b7a-4db5-a6a9-62f520e4b67f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.409090 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmhxm\" (UniqueName: \"kubernetes.io/projected/067c599c-9b7a-4db5-a6a9-62f520e4b67f-kube-api-access-pmhxm\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"067c599c-9b7a-4db5-a6a9-62f520e4b67f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.409114 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6f5576a7-0257-4785-b47e-b6990f016c51-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"6f5576a7-0257-4785-b47e-b6990f016c51\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.409147 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5576a7-0257-4785-b47e-b6990f016c51-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"6f5576a7-0257-4785-b47e-b6990f016c51\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.409186 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/067c599c-9b7a-4db5-a6a9-62f520e4b67f-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"067c599c-9b7a-4db5-a6a9-62f520e4b67f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.413949 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5576a7-0257-4785-b47e-b6990f016c51-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"6f5576a7-0257-4785-b47e-b6990f016c51\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.414863 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f5576a7-0257-4785-b47e-b6990f016c51-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"6f5576a7-0257-4785-b47e-b6990f016c51\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.415224 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6f5576a7-0257-4785-b47e-b6990f016c51-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"6f5576a7-0257-4785-b47e-b6990f016c51\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.433822 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc2x2\" (UniqueName: \"kubernetes.io/projected/6f5576a7-0257-4785-b47e-b6990f016c51-kube-api-access-xc2x2\") pod \"watcher-kuttl-applier-0\" (UID: \"6f5576a7-0257-4785-b47e-b6990f016c51\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.440646 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f5576a7-0257-4785-b47e-b6990f016c51-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"6f5576a7-0257-4785-b47e-b6990f016c51\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.511430 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/067c599c-9b7a-4db5-a6a9-62f520e4b67f-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"067c599c-9b7a-4db5-a6a9-62f520e4b67f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.511485 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/067c599c-9b7a-4db5-a6a9-62f520e4b67f-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"067c599c-9b7a-4db5-a6a9-62f520e4b67f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.511515 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmhxm\" (UniqueName: \"kubernetes.io/projected/067c599c-9b7a-4db5-a6a9-62f520e4b67f-kube-api-access-pmhxm\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"067c599c-9b7a-4db5-a6a9-62f520e4b67f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.511568 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/067c599c-9b7a-4db5-a6a9-62f520e4b67f-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"067c599c-9b7a-4db5-a6a9-62f520e4b67f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.511614 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/067c599c-9b7a-4db5-a6a9-62f520e4b67f-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"067c599c-9b7a-4db5-a6a9-62f520e4b67f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.511683 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/067c599c-9b7a-4db5-a6a9-62f520e4b67f-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"067c599c-9b7a-4db5-a6a9-62f520e4b67f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.514088 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/067c599c-9b7a-4db5-a6a9-62f520e4b67f-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"067c599c-9b7a-4db5-a6a9-62f520e4b67f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.515440 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/067c599c-9b7a-4db5-a6a9-62f520e4b67f-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"067c599c-9b7a-4db5-a6a9-62f520e4b67f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.517196 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/067c599c-9b7a-4db5-a6a9-62f520e4b67f-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"067c599c-9b7a-4db5-a6a9-62f520e4b67f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.517794 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/067c599c-9b7a-4db5-a6a9-62f520e4b67f-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"067c599c-9b7a-4db5-a6a9-62f520e4b67f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.539049 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/067c599c-9b7a-4db5-a6a9-62f520e4b67f-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"067c599c-9b7a-4db5-a6a9-62f520e4b67f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.540969 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmhxm\" (UniqueName: \"kubernetes.io/projected/067c599c-9b7a-4db5-a6a9-62f520e4b67f-kube-api-access-pmhxm\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"067c599c-9b7a-4db5-a6a9-62f520e4b67f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.646989 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.664603 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.826975 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Dec 02 20:41:31 crc kubenswrapper[4796]: I1202 20:41:31.943156 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:41:32 crc kubenswrapper[4796]: I1202 20:41:32.034879 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-db-sync-gldrd"] Dec 02 20:41:32 crc kubenswrapper[4796]: I1202 20:41:32.041825 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-db-sync-gldrd"] Dec 02 20:41:32 crc kubenswrapper[4796]: I1202 20:41:32.140816 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:41:32 crc kubenswrapper[4796]: I1202 20:41:32.239571 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:41:32 crc kubenswrapper[4796]: I1202 20:41:32.597757 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"abb95a24-b20f-4f5d-b988-f994a63683d7","Type":"ContainerStarted","Data":"adad9541234634e9ec7f1b6b15be94a80d0721da319b16ea54096687a07d3281"} Dec 02 20:41:32 crc kubenswrapper[4796]: I1202 20:41:32.598423 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:41:32 crc kubenswrapper[4796]: I1202 20:41:32.598440 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"abb95a24-b20f-4f5d-b988-f994a63683d7","Type":"ContainerStarted","Data":"2374729028b8771660bb3ba0bbdb9ccc1e19e0a2bf77a1d2264ab6edfcdbc4c4"} Dec 02 20:41:32 crc kubenswrapper[4796]: I1202 20:41:32.598453 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"abb95a24-b20f-4f5d-b988-f994a63683d7","Type":"ContainerStarted","Data":"121f62730792f4171f76b8e0e5cbecf2ce95d8a984919f9a2316bbc5440c2230"} Dec 02 20:41:32 crc kubenswrapper[4796]: I1202 20:41:32.616414 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"fe178b48-56b0-493c-a133-ca8dacce432b","Type":"ContainerStarted","Data":"ea5eca9a5fdcc7c5ef64dd0b247ca2e9d387d618a026623ba7f7014f153f2ce4"} Dec 02 20:41:32 crc kubenswrapper[4796]: I1202 20:41:32.616490 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"fe178b48-56b0-493c-a133-ca8dacce432b","Type":"ContainerStarted","Data":"069381eaf59b25cd25ce5951d60e340c488bda63620937e27b5ae3610b9d2d27"} Dec 02 20:41:32 crc kubenswrapper[4796]: I1202 20:41:32.616523 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:41:32 crc kubenswrapper[4796]: I1202 20:41:32.616535 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"fe178b48-56b0-493c-a133-ca8dacce432b","Type":"ContainerStarted","Data":"12cb267383263f397875385fed994fb65c35e7b6f23122ab919a12799c4e7a35"} Dec 02 20:41:32 crc kubenswrapper[4796]: I1202 20:41:32.618876 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" 
event={"ID":"067c599c-9b7a-4db5-a6a9-62f520e4b67f","Type":"ContainerStarted","Data":"727b9b0f2943774efb0e489a034cdd1a2b4c436ad9716930deba4a40fadc4716"} Dec 02 20:41:32 crc kubenswrapper[4796]: I1202 20:41:32.618915 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"067c599c-9b7a-4db5-a6a9-62f520e4b67f","Type":"ContainerStarted","Data":"a218ae6bbfc26f0e13530de6eac78991d463e92a5231e410cf0a6b6bef1f93f0"} Dec 02 20:41:32 crc kubenswrapper[4796]: I1202 20:41:32.620282 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="fe178b48-56b0-493c-a133-ca8dacce432b" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.221:9322/\": dial tcp 10.217.0.221:9322: connect: connection refused" Dec 02 20:41:32 crc kubenswrapper[4796]: I1202 20:41:32.622618 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"6f5576a7-0257-4785-b47e-b6990f016c51","Type":"ContainerStarted","Data":"ef5b5bd27feb364a3f6145b81c91b3a71608b74307b64471317b67bda0ae10ed"} Dec 02 20:41:32 crc kubenswrapper[4796]: I1202 20:41:32.622649 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"6f5576a7-0257-4785-b47e-b6990f016c51","Type":"ContainerStarted","Data":"f1d823580a92deff2556e6191cb823b289b58fcbb9860869d300ccf4ce03d2b4"} Dec 02 20:41:32 crc kubenswrapper[4796]: I1202 20:41:32.648614 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-1" podStartSLOduration=2.648586776 podStartE2EDuration="2.648586776s" podCreationTimestamp="2025-12-02 20:41:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:41:32.634937637 +0000 UTC m=+1775.638313171" watchObservedRunningTime="2025-12-02 20:41:32.648586776 +0000 UTC m=+1775.651962310" Dec 02 20:41:32 crc kubenswrapper[4796]: I1202 20:41:32.665809 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.665778219 podStartE2EDuration="2.665778219s" podCreationTimestamp="2025-12-02 20:41:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:41:32.658080654 +0000 UTC m=+1775.661456188" watchObservedRunningTime="2025-12-02 20:41:32.665778219 +0000 UTC m=+1775.669153753" Dec 02 20:41:32 crc kubenswrapper[4796]: I1202 20:41:32.697392 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=1.697369519 podStartE2EDuration="1.697369519s" podCreationTimestamp="2025-12-02 20:41:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:41:32.686779944 +0000 UTC m=+1775.690155478" watchObservedRunningTime="2025-12-02 20:41:32.697369519 +0000 UTC m=+1775.700745053" Dec 02 20:41:32 crc kubenswrapper[4796]: I1202 20:41:32.721191 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.721164951 podStartE2EDuration="2.721164951s" podCreationTimestamp="2025-12-02 20:41:30 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:41:32.7107501 +0000 UTC m=+1775.714125654" watchObservedRunningTime="2025-12-02 20:41:32.721164951 +0000 UTC m=+1775.724540495" Dec 02 20:41:33 crc kubenswrapper[4796]: I1202 20:41:33.276229 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fca6606-3b10-4323-99f7-1baae97f5477" path="/var/lib/kubelet/pods/3fca6606-3b10-4323-99f7-1baae97f5477/volumes" Dec 02 20:41:35 crc kubenswrapper[4796]: I1202 20:41:35.216212 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:41:35 crc kubenswrapper[4796]: I1202 20:41:35.699556 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:41:35 crc kubenswrapper[4796]: E1202 20:41:35.788590 4796 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.241:60298->38.102.83.241:34197: write tcp 38.102.83.241:60298->38.102.83.241:34197: write: broken pipe Dec 02 20:41:36 crc kubenswrapper[4796]: I1202 20:41:36.188927 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:41:36 crc kubenswrapper[4796]: I1202 20:41:36.255595 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:41:36 crc kubenswrapper[4796]: I1202 20:41:36.647246 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:41:41 crc kubenswrapper[4796]: I1202 20:41:41.189506 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:41:41 crc kubenswrapper[4796]: I1202 20:41:41.210575 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:41:41 crc kubenswrapper[4796]: I1202 20:41:41.254333 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:41:41 crc kubenswrapper[4796]: I1202 20:41:41.263029 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:41:41 crc kubenswrapper[4796]: E1202 20:41:41.457135 4796 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.241:60522->38.102.83.241:34197: write tcp 38.102.83.241:60522->38.102.83.241:34197: write: broken pipe Dec 02 20:41:41 crc kubenswrapper[4796]: I1202 20:41:41.647181 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:41:41 crc kubenswrapper[4796]: I1202 20:41:41.666071 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:41:41 crc kubenswrapper[4796]: I1202 20:41:41.679356 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:41:41 crc kubenswrapper[4796]: I1202 20:41:41.713514 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:41:41 crc kubenswrapper[4796]: I1202 20:41:41.714894 4796 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:41:41 crc kubenswrapper[4796]: I1202 20:41:41.719459 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:41:41 crc kubenswrapper[4796]: I1202 20:41:41.723915 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:41:41 crc kubenswrapper[4796]: I1202 20:41:41.768137 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:41:41 crc kubenswrapper[4796]: I1202 20:41:41.772844 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:41:42 crc kubenswrapper[4796]: I1202 20:41:42.264690 4796 scope.go:117] "RemoveContainer" containerID="a3b1e4c1e71e5db50d52130283a4f488c4d2f8d9bfa463ec7357e0975e3daac1" Dec 02 20:41:42 crc kubenswrapper[4796]: E1202 20:41:42.265194 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wzhpq_openshift-machine-config-operator(5558dc7c-93f9-4212-bf22-fdec743e47ee)\"" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" Dec 02 20:41:43 crc kubenswrapper[4796]: I1202 20:41:43.832280 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:41:43 crc kubenswrapper[4796]: I1202 20:41:43.832577 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="41a65a68-e5b2-46fb-8b70-33587ef4bff6" containerName="ceilometer-central-agent" containerID="cri-o://e1bf15641c426950c2d100fcacf973031097b62fafac174d36b16d8a55c6fe10" gracePeriod=30 Dec 02 20:41:43 crc kubenswrapper[4796]: I1202 20:41:43.832704 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="41a65a68-e5b2-46fb-8b70-33587ef4bff6" containerName="sg-core" containerID="cri-o://74e99c0b8854d8bf2608f0dccf204b5b7f08080b3bbdee377cbcab4b0eda49e2" gracePeriod=30 Dec 02 20:41:43 crc kubenswrapper[4796]: I1202 20:41:43.832754 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="41a65a68-e5b2-46fb-8b70-33587ef4bff6" containerName="proxy-httpd" containerID="cri-o://02358188fa1a84788442162ac3d9c8b5c814d7ee5ec30a27dcb2e903c9cd231d" gracePeriod=30 Dec 02 20:41:43 crc kubenswrapper[4796]: I1202 20:41:43.832790 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="41a65a68-e5b2-46fb-8b70-33587ef4bff6" containerName="ceilometer-notification-agent" containerID="cri-o://7f263aaab7360d8c6902261584ed6fcdf223a79bebe6a26cc2688252a85a1bc6" gracePeriod=30 Dec 02 20:41:43 crc kubenswrapper[4796]: I1202 20:41:43.852005 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="41a65a68-e5b2-46fb-8b70-33587ef4bff6" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 02 20:41:44 crc kubenswrapper[4796]: I1202 20:41:44.748293 4796 generic.go:334] "Generic (PLEG): 
container finished" podID="41a65a68-e5b2-46fb-8b70-33587ef4bff6" containerID="02358188fa1a84788442162ac3d9c8b5c814d7ee5ec30a27dcb2e903c9cd231d" exitCode=0 Dec 02 20:41:44 crc kubenswrapper[4796]: I1202 20:41:44.748346 4796 generic.go:334] "Generic (PLEG): container finished" podID="41a65a68-e5b2-46fb-8b70-33587ef4bff6" containerID="74e99c0b8854d8bf2608f0dccf204b5b7f08080b3bbdee377cbcab4b0eda49e2" exitCode=2 Dec 02 20:41:44 crc kubenswrapper[4796]: I1202 20:41:44.748364 4796 generic.go:334] "Generic (PLEG): container finished" podID="41a65a68-e5b2-46fb-8b70-33587ef4bff6" containerID="e1bf15641c426950c2d100fcacf973031097b62fafac174d36b16d8a55c6fe10" exitCode=0 Dec 02 20:41:44 crc kubenswrapper[4796]: I1202 20:41:44.748364 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"41a65a68-e5b2-46fb-8b70-33587ef4bff6","Type":"ContainerDied","Data":"02358188fa1a84788442162ac3d9c8b5c814d7ee5ec30a27dcb2e903c9cd231d"} Dec 02 20:41:44 crc kubenswrapper[4796]: I1202 20:41:44.748412 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"41a65a68-e5b2-46fb-8b70-33587ef4bff6","Type":"ContainerDied","Data":"74e99c0b8854d8bf2608f0dccf204b5b7f08080b3bbdee377cbcab4b0eda49e2"} Dec 02 20:41:44 crc kubenswrapper[4796]: I1202 20:41:44.748425 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"41a65a68-e5b2-46fb-8b70-33587ef4bff6","Type":"ContainerDied","Data":"e1bf15641c426950c2d100fcacf973031097b62fafac174d36b16d8a55c6fe10"} Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.573825 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.640913 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41a65a68-e5b2-46fb-8b70-33587ef4bff6-log-httpd\") pod \"41a65a68-e5b2-46fb-8b70-33587ef4bff6\" (UID: \"41a65a68-e5b2-46fb-8b70-33587ef4bff6\") " Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.640981 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/41a65a68-e5b2-46fb-8b70-33587ef4bff6-ceilometer-tls-certs\") pod \"41a65a68-e5b2-46fb-8b70-33587ef4bff6\" (UID: \"41a65a68-e5b2-46fb-8b70-33587ef4bff6\") " Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.641069 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/41a65a68-e5b2-46fb-8b70-33587ef4bff6-sg-core-conf-yaml\") pod \"41a65a68-e5b2-46fb-8b70-33587ef4bff6\" (UID: \"41a65a68-e5b2-46fb-8b70-33587ef4bff6\") " Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.641088 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41a65a68-e5b2-46fb-8b70-33587ef4bff6-scripts\") pod \"41a65a68-e5b2-46fb-8b70-33587ef4bff6\" (UID: \"41a65a68-e5b2-46fb-8b70-33587ef4bff6\") " Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.641144 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41a65a68-e5b2-46fb-8b70-33587ef4bff6-combined-ca-bundle\") pod \"41a65a68-e5b2-46fb-8b70-33587ef4bff6\" (UID: \"41a65a68-e5b2-46fb-8b70-33587ef4bff6\") " Dec 02 20:41:49 
crc kubenswrapper[4796]: I1202 20:41:49.641171 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41a65a68-e5b2-46fb-8b70-33587ef4bff6-config-data\") pod \"41a65a68-e5b2-46fb-8b70-33587ef4bff6\" (UID: \"41a65a68-e5b2-46fb-8b70-33587ef4bff6\") " Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.641222 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41a65a68-e5b2-46fb-8b70-33587ef4bff6-run-httpd\") pod \"41a65a68-e5b2-46fb-8b70-33587ef4bff6\" (UID: \"41a65a68-e5b2-46fb-8b70-33587ef4bff6\") " Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.641328 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dppxq\" (UniqueName: \"kubernetes.io/projected/41a65a68-e5b2-46fb-8b70-33587ef4bff6-kube-api-access-dppxq\") pod \"41a65a68-e5b2-46fb-8b70-33587ef4bff6\" (UID: \"41a65a68-e5b2-46fb-8b70-33587ef4bff6\") " Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.641532 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41a65a68-e5b2-46fb-8b70-33587ef4bff6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "41a65a68-e5b2-46fb-8b70-33587ef4bff6" (UID: "41a65a68-e5b2-46fb-8b70-33587ef4bff6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.641771 4796 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41a65a68-e5b2-46fb-8b70-33587ef4bff6-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.642281 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41a65a68-e5b2-46fb-8b70-33587ef4bff6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "41a65a68-e5b2-46fb-8b70-33587ef4bff6" (UID: "41a65a68-e5b2-46fb-8b70-33587ef4bff6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.646891 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41a65a68-e5b2-46fb-8b70-33587ef4bff6-kube-api-access-dppxq" (OuterVolumeSpecName: "kube-api-access-dppxq") pod "41a65a68-e5b2-46fb-8b70-33587ef4bff6" (UID: "41a65a68-e5b2-46fb-8b70-33587ef4bff6"). InnerVolumeSpecName "kube-api-access-dppxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.670404 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41a65a68-e5b2-46fb-8b70-33587ef4bff6-scripts" (OuterVolumeSpecName: "scripts") pod "41a65a68-e5b2-46fb-8b70-33587ef4bff6" (UID: "41a65a68-e5b2-46fb-8b70-33587ef4bff6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.678552 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41a65a68-e5b2-46fb-8b70-33587ef4bff6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "41a65a68-e5b2-46fb-8b70-33587ef4bff6" (UID: "41a65a68-e5b2-46fb-8b70-33587ef4bff6"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.695406 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41a65a68-e5b2-46fb-8b70-33587ef4bff6-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "41a65a68-e5b2-46fb-8b70-33587ef4bff6" (UID: "41a65a68-e5b2-46fb-8b70-33587ef4bff6"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.719339 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41a65a68-e5b2-46fb-8b70-33587ef4bff6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41a65a68-e5b2-46fb-8b70-33587ef4bff6" (UID: "41a65a68-e5b2-46fb-8b70-33587ef4bff6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.742928 4796 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/41a65a68-e5b2-46fb-8b70-33587ef4bff6-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.742961 4796 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/41a65a68-e5b2-46fb-8b70-33587ef4bff6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.742974 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41a65a68-e5b2-46fb-8b70-33587ef4bff6-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.742986 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41a65a68-e5b2-46fb-8b70-33587ef4bff6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.742996 4796 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41a65a68-e5b2-46fb-8b70-33587ef4bff6-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.743006 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dppxq\" (UniqueName: \"kubernetes.io/projected/41a65a68-e5b2-46fb-8b70-33587ef4bff6-kube-api-access-dppxq\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.743135 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41a65a68-e5b2-46fb-8b70-33587ef4bff6-config-data" (OuterVolumeSpecName: "config-data") pod "41a65a68-e5b2-46fb-8b70-33587ef4bff6" (UID: "41a65a68-e5b2-46fb-8b70-33587ef4bff6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.811558 4796 generic.go:334] "Generic (PLEG): container finished" podID="41a65a68-e5b2-46fb-8b70-33587ef4bff6" containerID="7f263aaab7360d8c6902261584ed6fcdf223a79bebe6a26cc2688252a85a1bc6" exitCode=0 Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.811598 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"41a65a68-e5b2-46fb-8b70-33587ef4bff6","Type":"ContainerDied","Data":"7f263aaab7360d8c6902261584ed6fcdf223a79bebe6a26cc2688252a85a1bc6"} Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.811621 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"41a65a68-e5b2-46fb-8b70-33587ef4bff6","Type":"ContainerDied","Data":"b67c264f1431a35683adb393adf9ea2dcc9beb53516e022ad5892f209eb8a408"} Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.811626 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.811638 4796 scope.go:117] "RemoveContainer" containerID="02358188fa1a84788442162ac3d9c8b5c814d7ee5ec30a27dcb2e903c9cd231d" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.842586 4796 scope.go:117] "RemoveContainer" containerID="74e99c0b8854d8bf2608f0dccf204b5b7f08080b3bbdee377cbcab4b0eda49e2" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.844358 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41a65a68-e5b2-46fb-8b70-33587ef4bff6-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.866458 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.875077 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.887465 4796 scope.go:117] "RemoveContainer" containerID="7f263aaab7360d8c6902261584ed6fcdf223a79bebe6a26cc2688252a85a1bc6" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.906337 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:41:49 crc kubenswrapper[4796]: E1202 20:41:49.906975 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41a65a68-e5b2-46fb-8b70-33587ef4bff6" containerName="ceilometer-notification-agent" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.907071 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="41a65a68-e5b2-46fb-8b70-33587ef4bff6" containerName="ceilometer-notification-agent" Dec 02 20:41:49 crc kubenswrapper[4796]: E1202 20:41:49.907163 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41a65a68-e5b2-46fb-8b70-33587ef4bff6" containerName="sg-core" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.907235 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="41a65a68-e5b2-46fb-8b70-33587ef4bff6" containerName="sg-core" Dec 02 20:41:49 crc kubenswrapper[4796]: E1202 20:41:49.907358 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41a65a68-e5b2-46fb-8b70-33587ef4bff6" containerName="proxy-httpd" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.907433 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="41a65a68-e5b2-46fb-8b70-33587ef4bff6" 
containerName="proxy-httpd" Dec 02 20:41:49 crc kubenswrapper[4796]: E1202 20:41:49.907524 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41a65a68-e5b2-46fb-8b70-33587ef4bff6" containerName="ceilometer-central-agent" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.907604 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="41a65a68-e5b2-46fb-8b70-33587ef4bff6" containerName="ceilometer-central-agent" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.907888 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="41a65a68-e5b2-46fb-8b70-33587ef4bff6" containerName="ceilometer-notification-agent" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.907985 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="41a65a68-e5b2-46fb-8b70-33587ef4bff6" containerName="proxy-httpd" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.908080 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="41a65a68-e5b2-46fb-8b70-33587ef4bff6" containerName="ceilometer-central-agent" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.908170 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="41a65a68-e5b2-46fb-8b70-33587ef4bff6" containerName="sg-core" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.910878 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.911995 4796 scope.go:117] "RemoveContainer" containerID="e1bf15641c426950c2d100fcacf973031097b62fafac174d36b16d8a55c6fe10" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.913535 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.913789 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.913909 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.914176 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.945787 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68c7dbaf-3568-4e03-ba64-c0aafac02efd-scripts\") pod \"ceilometer-0\" (UID: \"68c7dbaf-3568-4e03-ba64-c0aafac02efd\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.945852 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cblrx\" (UniqueName: \"kubernetes.io/projected/68c7dbaf-3568-4e03-ba64-c0aafac02efd-kube-api-access-cblrx\") pod \"ceilometer-0\" (UID: \"68c7dbaf-3568-4e03-ba64-c0aafac02efd\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.945891 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68c7dbaf-3568-4e03-ba64-c0aafac02efd-log-httpd\") pod \"ceilometer-0\" (UID: \"68c7dbaf-3568-4e03-ba64-c0aafac02efd\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.945963 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68c7dbaf-3568-4e03-ba64-c0aafac02efd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"68c7dbaf-3568-4e03-ba64-c0aafac02efd\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.946003 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68c7dbaf-3568-4e03-ba64-c0aafac02efd-run-httpd\") pod \"ceilometer-0\" (UID: \"68c7dbaf-3568-4e03-ba64-c0aafac02efd\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.946023 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/68c7dbaf-3568-4e03-ba64-c0aafac02efd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"68c7dbaf-3568-4e03-ba64-c0aafac02efd\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.946054 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/68c7dbaf-3568-4e03-ba64-c0aafac02efd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"68c7dbaf-3568-4e03-ba64-c0aafac02efd\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.946073 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68c7dbaf-3568-4e03-ba64-c0aafac02efd-config-data\") pod \"ceilometer-0\" (UID: \"68c7dbaf-3568-4e03-ba64-c0aafac02efd\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.950342 4796 scope.go:117] "RemoveContainer" containerID="02358188fa1a84788442162ac3d9c8b5c814d7ee5ec30a27dcb2e903c9cd231d" Dec 02 20:41:49 crc kubenswrapper[4796]: E1202 20:41:49.950784 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02358188fa1a84788442162ac3d9c8b5c814d7ee5ec30a27dcb2e903c9cd231d\": container with ID starting with 02358188fa1a84788442162ac3d9c8b5c814d7ee5ec30a27dcb2e903c9cd231d not found: ID does not exist" containerID="02358188fa1a84788442162ac3d9c8b5c814d7ee5ec30a27dcb2e903c9cd231d" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.950814 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02358188fa1a84788442162ac3d9c8b5c814d7ee5ec30a27dcb2e903c9cd231d"} err="failed to get container status \"02358188fa1a84788442162ac3d9c8b5c814d7ee5ec30a27dcb2e903c9cd231d\": rpc error: code = NotFound desc = could not find container \"02358188fa1a84788442162ac3d9c8b5c814d7ee5ec30a27dcb2e903c9cd231d\": container with ID starting with 02358188fa1a84788442162ac3d9c8b5c814d7ee5ec30a27dcb2e903c9cd231d not found: ID does not exist" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.950836 4796 scope.go:117] "RemoveContainer" containerID="74e99c0b8854d8bf2608f0dccf204b5b7f08080b3bbdee377cbcab4b0eda49e2" Dec 02 20:41:49 crc kubenswrapper[4796]: E1202 20:41:49.951200 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74e99c0b8854d8bf2608f0dccf204b5b7f08080b3bbdee377cbcab4b0eda49e2\": container 
with ID starting with 74e99c0b8854d8bf2608f0dccf204b5b7f08080b3bbdee377cbcab4b0eda49e2 not found: ID does not exist" containerID="74e99c0b8854d8bf2608f0dccf204b5b7f08080b3bbdee377cbcab4b0eda49e2" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.951225 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74e99c0b8854d8bf2608f0dccf204b5b7f08080b3bbdee377cbcab4b0eda49e2"} err="failed to get container status \"74e99c0b8854d8bf2608f0dccf204b5b7f08080b3bbdee377cbcab4b0eda49e2\": rpc error: code = NotFound desc = could not find container \"74e99c0b8854d8bf2608f0dccf204b5b7f08080b3bbdee377cbcab4b0eda49e2\": container with ID starting with 74e99c0b8854d8bf2608f0dccf204b5b7f08080b3bbdee377cbcab4b0eda49e2 not found: ID does not exist" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.951241 4796 scope.go:117] "RemoveContainer" containerID="7f263aaab7360d8c6902261584ed6fcdf223a79bebe6a26cc2688252a85a1bc6" Dec 02 20:41:49 crc kubenswrapper[4796]: E1202 20:41:49.951573 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f263aaab7360d8c6902261584ed6fcdf223a79bebe6a26cc2688252a85a1bc6\": container with ID starting with 7f263aaab7360d8c6902261584ed6fcdf223a79bebe6a26cc2688252a85a1bc6 not found: ID does not exist" containerID="7f263aaab7360d8c6902261584ed6fcdf223a79bebe6a26cc2688252a85a1bc6" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.951612 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f263aaab7360d8c6902261584ed6fcdf223a79bebe6a26cc2688252a85a1bc6"} err="failed to get container status \"7f263aaab7360d8c6902261584ed6fcdf223a79bebe6a26cc2688252a85a1bc6\": rpc error: code = NotFound desc = could not find container \"7f263aaab7360d8c6902261584ed6fcdf223a79bebe6a26cc2688252a85a1bc6\": container with ID starting with 7f263aaab7360d8c6902261584ed6fcdf223a79bebe6a26cc2688252a85a1bc6 not found: ID does not exist" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.951630 4796 scope.go:117] "RemoveContainer" containerID="e1bf15641c426950c2d100fcacf973031097b62fafac174d36b16d8a55c6fe10" Dec 02 20:41:49 crc kubenswrapper[4796]: E1202 20:41:49.951850 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1bf15641c426950c2d100fcacf973031097b62fafac174d36b16d8a55c6fe10\": container with ID starting with e1bf15641c426950c2d100fcacf973031097b62fafac174d36b16d8a55c6fe10 not found: ID does not exist" containerID="e1bf15641c426950c2d100fcacf973031097b62fafac174d36b16d8a55c6fe10" Dec 02 20:41:49 crc kubenswrapper[4796]: I1202 20:41:49.951873 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1bf15641c426950c2d100fcacf973031097b62fafac174d36b16d8a55c6fe10"} err="failed to get container status \"e1bf15641c426950c2d100fcacf973031097b62fafac174d36b16d8a55c6fe10\": rpc error: code = NotFound desc = could not find container \"e1bf15641c426950c2d100fcacf973031097b62fafac174d36b16d8a55c6fe10\": container with ID starting with e1bf15641c426950c2d100fcacf973031097b62fafac174d36b16d8a55c6fe10 not found: ID does not exist" Dec 02 20:41:50 crc kubenswrapper[4796]: I1202 20:41:50.047471 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68c7dbaf-3568-4e03-ba64-c0aafac02efd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"68c7dbaf-3568-4e03-ba64-c0aafac02efd\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:50 crc kubenswrapper[4796]: I1202 20:41:50.047524 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68c7dbaf-3568-4e03-ba64-c0aafac02efd-run-httpd\") pod \"ceilometer-0\" (UID: \"68c7dbaf-3568-4e03-ba64-c0aafac02efd\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:50 crc kubenswrapper[4796]: I1202 20:41:50.047542 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/68c7dbaf-3568-4e03-ba64-c0aafac02efd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"68c7dbaf-3568-4e03-ba64-c0aafac02efd\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:50 crc kubenswrapper[4796]: I1202 20:41:50.047566 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68c7dbaf-3568-4e03-ba64-c0aafac02efd-config-data\") pod \"ceilometer-0\" (UID: \"68c7dbaf-3568-4e03-ba64-c0aafac02efd\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:50 crc kubenswrapper[4796]: I1202 20:41:50.047580 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/68c7dbaf-3568-4e03-ba64-c0aafac02efd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"68c7dbaf-3568-4e03-ba64-c0aafac02efd\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:50 crc kubenswrapper[4796]: I1202 20:41:50.047621 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68c7dbaf-3568-4e03-ba64-c0aafac02efd-scripts\") pod \"ceilometer-0\" (UID: \"68c7dbaf-3568-4e03-ba64-c0aafac02efd\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:50 crc kubenswrapper[4796]: I1202 20:41:50.047646 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cblrx\" (UniqueName: \"kubernetes.io/projected/68c7dbaf-3568-4e03-ba64-c0aafac02efd-kube-api-access-cblrx\") pod \"ceilometer-0\" (UID: \"68c7dbaf-3568-4e03-ba64-c0aafac02efd\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:50 crc kubenswrapper[4796]: I1202 20:41:50.047669 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68c7dbaf-3568-4e03-ba64-c0aafac02efd-log-httpd\") pod \"ceilometer-0\" (UID: \"68c7dbaf-3568-4e03-ba64-c0aafac02efd\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:50 crc kubenswrapper[4796]: I1202 20:41:50.048066 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68c7dbaf-3568-4e03-ba64-c0aafac02efd-log-httpd\") pod \"ceilometer-0\" (UID: \"68c7dbaf-3568-4e03-ba64-c0aafac02efd\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:50 crc kubenswrapper[4796]: I1202 20:41:50.048236 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68c7dbaf-3568-4e03-ba64-c0aafac02efd-run-httpd\") pod \"ceilometer-0\" (UID: \"68c7dbaf-3568-4e03-ba64-c0aafac02efd\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:50 crc kubenswrapper[4796]: I1202 20:41:50.051178 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/68c7dbaf-3568-4e03-ba64-c0aafac02efd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"68c7dbaf-3568-4e03-ba64-c0aafac02efd\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:50 crc kubenswrapper[4796]: I1202 20:41:50.051693 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68c7dbaf-3568-4e03-ba64-c0aafac02efd-scripts\") pod \"ceilometer-0\" (UID: \"68c7dbaf-3568-4e03-ba64-c0aafac02efd\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:50 crc kubenswrapper[4796]: I1202 20:41:50.052109 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68c7dbaf-3568-4e03-ba64-c0aafac02efd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"68c7dbaf-3568-4e03-ba64-c0aafac02efd\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:50 crc kubenswrapper[4796]: I1202 20:41:50.052138 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/68c7dbaf-3568-4e03-ba64-c0aafac02efd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"68c7dbaf-3568-4e03-ba64-c0aafac02efd\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:50 crc kubenswrapper[4796]: I1202 20:41:50.055676 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68c7dbaf-3568-4e03-ba64-c0aafac02efd-config-data\") pod \"ceilometer-0\" (UID: \"68c7dbaf-3568-4e03-ba64-c0aafac02efd\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:50 crc kubenswrapper[4796]: I1202 20:41:50.068594 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cblrx\" (UniqueName: \"kubernetes.io/projected/68c7dbaf-3568-4e03-ba64-c0aafac02efd-kube-api-access-cblrx\") pod \"ceilometer-0\" (UID: \"68c7dbaf-3568-4e03-ba64-c0aafac02efd\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:50 crc kubenswrapper[4796]: I1202 20:41:50.241807 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:50 crc kubenswrapper[4796]: I1202 20:41:50.702504 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:41:50 crc kubenswrapper[4796]: I1202 20:41:50.842711 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"68c7dbaf-3568-4e03-ba64-c0aafac02efd","Type":"ContainerStarted","Data":"a1c5016017933d72452ca9a0e71e5f948c8de62c48870611be8bcacc0202636d"} Dec 02 20:41:51 crc kubenswrapper[4796]: I1202 20:41:51.276455 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41a65a68-e5b2-46fb-8b70-33587ef4bff6" path="/var/lib/kubelet/pods/41a65a68-e5b2-46fb-8b70-33587ef4bff6/volumes" Dec 02 20:41:51 crc kubenswrapper[4796]: I1202 20:41:51.858157 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"68c7dbaf-3568-4e03-ba64-c0aafac02efd","Type":"ContainerStarted","Data":"b28e60c8625c0d2f87c94bf8aa16c6074cd0eba1d050c94166f8d9dc37cc1df7"} Dec 02 20:41:52 crc kubenswrapper[4796]: I1202 20:41:52.885692 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"68c7dbaf-3568-4e03-ba64-c0aafac02efd","Type":"ContainerStarted","Data":"9afb9bd9ee0d90898b6453f7145d9d1c84618135ba5e17267a725b63baff88bc"} Dec 02 20:41:53 crc kubenswrapper[4796]: I1202 20:41:53.898109 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"68c7dbaf-3568-4e03-ba64-c0aafac02efd","Type":"ContainerStarted","Data":"7150d1cde44053b29f905e304c51d05b9f4e2bbe4923075c21b5a41c919b0c3f"} Dec 02 20:41:54 crc kubenswrapper[4796]: I1202 20:41:54.914296 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"68c7dbaf-3568-4e03-ba64-c0aafac02efd","Type":"ContainerStarted","Data":"88846f6868acbd70f11460e4a447156a2052e0798dcc123b91bb60080b2ad659"} Dec 02 20:41:54 crc kubenswrapper[4796]: I1202 20:41:54.914994 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:41:54 crc kubenswrapper[4796]: I1202 20:41:54.944513 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.3933821809999998 podStartE2EDuration="5.944489005s" podCreationTimestamp="2025-12-02 20:41:49 +0000 UTC" firstStartedPulling="2025-12-02 20:41:50.709781487 +0000 UTC m=+1793.713157021" lastFinishedPulling="2025-12-02 20:41:54.260888311 +0000 UTC m=+1797.264263845" observedRunningTime="2025-12-02 20:41:54.941269308 +0000 UTC m=+1797.944644842" watchObservedRunningTime="2025-12-02 20:41:54.944489005 +0000 UTC m=+1797.947864539" Dec 02 20:41:55 crc kubenswrapper[4796]: I1202 20:41:55.265689 4796 scope.go:117] "RemoveContainer" containerID="a3b1e4c1e71e5db50d52130283a4f488c4d2f8d9bfa463ec7357e0975e3daac1" Dec 02 20:41:55 crc kubenswrapper[4796]: E1202 20:41:55.266310 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wzhpq_openshift-machine-config-operator(5558dc7c-93f9-4212-bf22-fdec743e47ee)\"" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" Dec 02 20:42:00 crc kubenswrapper[4796]: I1202 
20:42:00.158354 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29411802-wpqw7"] Dec 02 20:42:00 crc kubenswrapper[4796]: I1202 20:42:00.161618 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29411802-wpqw7" Dec 02 20:42:00 crc kubenswrapper[4796]: I1202 20:42:00.166633 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-scripts" Dec 02 20:42:00 crc kubenswrapper[4796]: I1202 20:42:00.166919 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Dec 02 20:42:00 crc kubenswrapper[4796]: I1202 20:42:00.185552 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29411802-wpqw7"] Dec 02 20:42:00 crc kubenswrapper[4796]: I1202 20:42:00.266330 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts-volume\" (UniqueName: \"kubernetes.io/secret/2657dcbd-334e-4677-831d-3e42c54834a3-scripts-volume\") pod \"watcher-kuttl-db-purge-29411802-wpqw7\" (UID: \"2657dcbd-334e-4677-831d-3e42c54834a3\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29411802-wpqw7" Dec 02 20:42:00 crc kubenswrapper[4796]: I1202 20:42:00.266734 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2657dcbd-334e-4677-831d-3e42c54834a3-combined-ca-bundle\") pod \"watcher-kuttl-db-purge-29411802-wpqw7\" (UID: \"2657dcbd-334e-4677-831d-3e42c54834a3\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29411802-wpqw7" Dec 02 20:42:00 crc kubenswrapper[4796]: I1202 20:42:00.266800 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2657dcbd-334e-4677-831d-3e42c54834a3-config-data\") pod \"watcher-kuttl-db-purge-29411802-wpqw7\" (UID: \"2657dcbd-334e-4677-831d-3e42c54834a3\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29411802-wpqw7" Dec 02 20:42:00 crc kubenswrapper[4796]: I1202 20:42:00.266888 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2wx4\" (UniqueName: \"kubernetes.io/projected/2657dcbd-334e-4677-831d-3e42c54834a3-kube-api-access-d2wx4\") pod \"watcher-kuttl-db-purge-29411802-wpqw7\" (UID: \"2657dcbd-334e-4677-831d-3e42c54834a3\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29411802-wpqw7" Dec 02 20:42:00 crc kubenswrapper[4796]: I1202 20:42:00.368162 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2657dcbd-334e-4677-831d-3e42c54834a3-combined-ca-bundle\") pod \"watcher-kuttl-db-purge-29411802-wpqw7\" (UID: \"2657dcbd-334e-4677-831d-3e42c54834a3\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29411802-wpqw7" Dec 02 20:42:00 crc kubenswrapper[4796]: I1202 20:42:00.368230 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2657dcbd-334e-4677-831d-3e42c54834a3-config-data\") pod \"watcher-kuttl-db-purge-29411802-wpqw7\" (UID: \"2657dcbd-334e-4677-831d-3e42c54834a3\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29411802-wpqw7" Dec 02 20:42:00 crc kubenswrapper[4796]: I1202 20:42:00.368298 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2wx4\" (UniqueName: \"kubernetes.io/projected/2657dcbd-334e-4677-831d-3e42c54834a3-kube-api-access-d2wx4\") pod \"watcher-kuttl-db-purge-29411802-wpqw7\" (UID: \"2657dcbd-334e-4677-831d-3e42c54834a3\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29411802-wpqw7" Dec 02 20:42:00 crc kubenswrapper[4796]: I1202 20:42:00.368353 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts-volume\" (UniqueName: \"kubernetes.io/secret/2657dcbd-334e-4677-831d-3e42c54834a3-scripts-volume\") pod \"watcher-kuttl-db-purge-29411802-wpqw7\" (UID: \"2657dcbd-334e-4677-831d-3e42c54834a3\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29411802-wpqw7" Dec 02 20:42:00 crc kubenswrapper[4796]: I1202 20:42:00.373615 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts-volume\" (UniqueName: \"kubernetes.io/secret/2657dcbd-334e-4677-831d-3e42c54834a3-scripts-volume\") pod \"watcher-kuttl-db-purge-29411802-wpqw7\" (UID: \"2657dcbd-334e-4677-831d-3e42c54834a3\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29411802-wpqw7" Dec 02 20:42:00 crc kubenswrapper[4796]: I1202 20:42:00.374498 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2657dcbd-334e-4677-831d-3e42c54834a3-combined-ca-bundle\") pod \"watcher-kuttl-db-purge-29411802-wpqw7\" (UID: \"2657dcbd-334e-4677-831d-3e42c54834a3\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29411802-wpqw7" Dec 02 20:42:00 crc kubenswrapper[4796]: I1202 20:42:00.375181 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2657dcbd-334e-4677-831d-3e42c54834a3-config-data\") pod \"watcher-kuttl-db-purge-29411802-wpqw7\" (UID: \"2657dcbd-334e-4677-831d-3e42c54834a3\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29411802-wpqw7" Dec 02 20:42:00 crc kubenswrapper[4796]: I1202 20:42:00.395857 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2wx4\" (UniqueName: \"kubernetes.io/projected/2657dcbd-334e-4677-831d-3e42c54834a3-kube-api-access-d2wx4\") pod \"watcher-kuttl-db-purge-29411802-wpqw7\" (UID: \"2657dcbd-334e-4677-831d-3e42c54834a3\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29411802-wpqw7" Dec 02 20:42:00 crc kubenswrapper[4796]: I1202 20:42:00.495005 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29411802-wpqw7" Dec 02 20:42:01 crc kubenswrapper[4796]: I1202 20:42:01.012004 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29411802-wpqw7"] Dec 02 20:42:01 crc kubenswrapper[4796]: I1202 20:42:01.986599 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29411802-wpqw7" event={"ID":"2657dcbd-334e-4677-831d-3e42c54834a3","Type":"ContainerStarted","Data":"dbb81e850aeae73d7e6bc245ef50494ff00824dd6b3ec462280a24ced5626c50"} Dec 02 20:42:01 crc kubenswrapper[4796]: I1202 20:42:01.987063 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29411802-wpqw7" event={"ID":"2657dcbd-334e-4677-831d-3e42c54834a3","Type":"ContainerStarted","Data":"89817f18ad2a29c1e835501030b8ef8ecb3435f91e369fd61906688f98fc417d"} Dec 02 20:42:02 crc kubenswrapper[4796]: I1202 20:42:02.023911 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29411802-wpqw7" podStartSLOduration=2.023880242 podStartE2EDuration="2.023880242s" podCreationTimestamp="2025-12-02 20:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:42:02.008029771 +0000 UTC m=+1805.011405365" watchObservedRunningTime="2025-12-02 20:42:02.023880242 +0000 UTC m=+1805.027255816" Dec 02 20:42:02 crc kubenswrapper[4796]: E1202 20:42:02.159892 4796 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.241:57492->38.102.83.241:34197: read tcp 38.102.83.241:57492->38.102.83.241:34197: read: connection reset by peer Dec 02 20:42:04 crc kubenswrapper[4796]: I1202 20:42:04.009165 4796 generic.go:334] "Generic (PLEG): container finished" podID="2657dcbd-334e-4677-831d-3e42c54834a3" containerID="dbb81e850aeae73d7e6bc245ef50494ff00824dd6b3ec462280a24ced5626c50" exitCode=0 Dec 02 20:42:04 crc kubenswrapper[4796]: I1202 20:42:04.009245 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29411802-wpqw7" event={"ID":"2657dcbd-334e-4677-831d-3e42c54834a3","Type":"ContainerDied","Data":"dbb81e850aeae73d7e6bc245ef50494ff00824dd6b3ec462280a24ced5626c50"} Dec 02 20:42:05 crc kubenswrapper[4796]: I1202 20:42:05.413798 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29411802-wpqw7" Dec 02 20:42:05 crc kubenswrapper[4796]: I1202 20:42:05.570836 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2wx4\" (UniqueName: \"kubernetes.io/projected/2657dcbd-334e-4677-831d-3e42c54834a3-kube-api-access-d2wx4\") pod \"2657dcbd-334e-4677-831d-3e42c54834a3\" (UID: \"2657dcbd-334e-4677-831d-3e42c54834a3\") " Dec 02 20:42:05 crc kubenswrapper[4796]: I1202 20:42:05.570919 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts-volume\" (UniqueName: \"kubernetes.io/secret/2657dcbd-334e-4677-831d-3e42c54834a3-scripts-volume\") pod \"2657dcbd-334e-4677-831d-3e42c54834a3\" (UID: \"2657dcbd-334e-4677-831d-3e42c54834a3\") " Dec 02 20:42:05 crc kubenswrapper[4796]: I1202 20:42:05.571129 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2657dcbd-334e-4677-831d-3e42c54834a3-config-data\") pod \"2657dcbd-334e-4677-831d-3e42c54834a3\" (UID: \"2657dcbd-334e-4677-831d-3e42c54834a3\") " Dec 02 20:42:05 crc kubenswrapper[4796]: I1202 20:42:05.571169 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2657dcbd-334e-4677-831d-3e42c54834a3-combined-ca-bundle\") pod \"2657dcbd-334e-4677-831d-3e42c54834a3\" (UID: \"2657dcbd-334e-4677-831d-3e42c54834a3\") " Dec 02 20:42:05 crc kubenswrapper[4796]: I1202 20:42:05.576560 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2657dcbd-334e-4677-831d-3e42c54834a3-kube-api-access-d2wx4" (OuterVolumeSpecName: "kube-api-access-d2wx4") pod "2657dcbd-334e-4677-831d-3e42c54834a3" (UID: "2657dcbd-334e-4677-831d-3e42c54834a3"). InnerVolumeSpecName "kube-api-access-d2wx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:42:05 crc kubenswrapper[4796]: I1202 20:42:05.578074 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2657dcbd-334e-4677-831d-3e42c54834a3-scripts-volume" (OuterVolumeSpecName: "scripts-volume") pod "2657dcbd-334e-4677-831d-3e42c54834a3" (UID: "2657dcbd-334e-4677-831d-3e42c54834a3"). InnerVolumeSpecName "scripts-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:42:05 crc kubenswrapper[4796]: I1202 20:42:05.607535 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2657dcbd-334e-4677-831d-3e42c54834a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2657dcbd-334e-4677-831d-3e42c54834a3" (UID: "2657dcbd-334e-4677-831d-3e42c54834a3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:42:05 crc kubenswrapper[4796]: I1202 20:42:05.629401 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2657dcbd-334e-4677-831d-3e42c54834a3-config-data" (OuterVolumeSpecName: "config-data") pod "2657dcbd-334e-4677-831d-3e42c54834a3" (UID: "2657dcbd-334e-4677-831d-3e42c54834a3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:42:05 crc kubenswrapper[4796]: I1202 20:42:05.672934 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2657dcbd-334e-4677-831d-3e42c54834a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:42:05 crc kubenswrapper[4796]: I1202 20:42:05.673041 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2wx4\" (UniqueName: \"kubernetes.io/projected/2657dcbd-334e-4677-831d-3e42c54834a3-kube-api-access-d2wx4\") on node \"crc\" DevicePath \"\"" Dec 02 20:42:05 crc kubenswrapper[4796]: I1202 20:42:05.673102 4796 reconciler_common.go:293] "Volume detached for volume \"scripts-volume\" (UniqueName: \"kubernetes.io/secret/2657dcbd-334e-4677-831d-3e42c54834a3-scripts-volume\") on node \"crc\" DevicePath \"\"" Dec 02 20:42:05 crc kubenswrapper[4796]: I1202 20:42:05.673165 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2657dcbd-334e-4677-831d-3e42c54834a3-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:42:06 crc kubenswrapper[4796]: I1202 20:42:06.039191 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29411802-wpqw7" event={"ID":"2657dcbd-334e-4677-831d-3e42c54834a3","Type":"ContainerDied","Data":"89817f18ad2a29c1e835501030b8ef8ecb3435f91e369fd61906688f98fc417d"} Dec 02 20:42:06 crc kubenswrapper[4796]: I1202 20:42:06.039288 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89817f18ad2a29c1e835501030b8ef8ecb3435f91e369fd61906688f98fc417d" Dec 02 20:42:06 crc kubenswrapper[4796]: I1202 20:42:06.039372 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29411802-wpqw7" Dec 02 20:42:06 crc kubenswrapper[4796]: I1202 20:42:06.265653 4796 scope.go:117] "RemoveContainer" containerID="a3b1e4c1e71e5db50d52130283a4f488c4d2f8d9bfa463ec7357e0975e3daac1" Dec 02 20:42:06 crc kubenswrapper[4796]: E1202 20:42:06.266044 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wzhpq_openshift-machine-config-operator(5558dc7c-93f9-4212-bf22-fdec743e47ee)\"" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" Dec 02 20:42:07 crc kubenswrapper[4796]: I1202 20:42:07.616635 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-4vnvt"] Dec 02 20:42:07 crc kubenswrapper[4796]: I1202 20:42:07.625056 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29411802-wpqw7"] Dec 02 20:42:07 crc kubenswrapper[4796]: I1202 20:42:07.635240 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-4vnvt"] Dec 02 20:42:07 crc kubenswrapper[4796]: I1202 20:42:07.644037 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29411802-wpqw7"] Dec 02 20:42:07 crc kubenswrapper[4796]: I1202 20:42:07.660427 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-4z8cf"] Dec 02 20:42:07 crc kubenswrapper[4796]: E1202 20:42:07.660908 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2657dcbd-334e-4677-831d-3e42c54834a3" containerName="watcher-db-manage" Dec 02 20:42:07 crc kubenswrapper[4796]: I1202 20:42:07.660933 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="2657dcbd-334e-4677-831d-3e42c54834a3" containerName="watcher-db-manage" Dec 02 20:42:07 crc kubenswrapper[4796]: I1202 20:42:07.661126 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="2657dcbd-334e-4677-831d-3e42c54834a3" containerName="watcher-db-manage" Dec 02 20:42:07 crc kubenswrapper[4796]: I1202 20:42:07.661946 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watchertest-account-delete-4z8cf" Dec 02 20:42:07 crc kubenswrapper[4796]: I1202 20:42:07.673130 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-4z8cf"] Dec 02 20:42:07 crc kubenswrapper[4796]: I1202 20:42:07.684201 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:42:07 crc kubenswrapper[4796]: I1202 20:42:07.684470 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="067c599c-9b7a-4db5-a6a9-62f520e4b67f" containerName="watcher-decision-engine" containerID="cri-o://727b9b0f2943774efb0e489a034cdd1a2b4c436ad9716930deba4a40fadc4716" gracePeriod=30 Dec 02 20:42:07 crc kubenswrapper[4796]: I1202 20:42:07.723435 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Dec 02 20:42:07 crc kubenswrapper[4796]: I1202 20:42:07.723666 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="abb95a24-b20f-4f5d-b988-f994a63683d7" containerName="watcher-kuttl-api-log" containerID="cri-o://2374729028b8771660bb3ba0bbdb9ccc1e19e0a2bf77a1d2264ab6edfcdbc4c4" gracePeriod=30 Dec 02 20:42:07 crc kubenswrapper[4796]: I1202 20:42:07.724046 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="abb95a24-b20f-4f5d-b988-f994a63683d7" containerName="watcher-api" containerID="cri-o://adad9541234634e9ec7f1b6b15be94a80d0721da319b16ea54096687a07d3281" gracePeriod=30 Dec 02 20:42:07 crc kubenswrapper[4796]: I1202 20:42:07.732964 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:42:07 crc kubenswrapper[4796]: I1202 20:42:07.733181 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="fe178b48-56b0-493c-a133-ca8dacce432b" containerName="watcher-kuttl-api-log" containerID="cri-o://069381eaf59b25cd25ce5951d60e340c488bda63620937e27b5ae3610b9d2d27" gracePeriod=30 Dec 02 20:42:07 crc kubenswrapper[4796]: I1202 20:42:07.733633 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="fe178b48-56b0-493c-a133-ca8dacce432b" containerName="watcher-api" containerID="cri-o://ea5eca9a5fdcc7c5ef64dd0b247ca2e9d387d618a026623ba7f7014f153f2ce4" gracePeriod=30 Dec 02 20:42:07 crc kubenswrapper[4796]: I1202 20:42:07.755445 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:42:07 crc kubenswrapper[4796]: I1202 20:42:07.755687 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="6f5576a7-0257-4785-b47e-b6990f016c51" containerName="watcher-applier" containerID="cri-o://ef5b5bd27feb364a3f6145b81c91b3a71608b74307b64471317b67bda0ae10ed" gracePeriod=30 Dec 02 20:42:07 crc kubenswrapper[4796]: I1202 20:42:07.815536 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtdsd\" (UniqueName: \"kubernetes.io/projected/0add6e59-6e90-4a8d-ad2f-5a8e6691f9e1-kube-api-access-vtdsd\") pod \"watchertest-account-delete-4z8cf\" (UID: \"0add6e59-6e90-4a8d-ad2f-5a8e6691f9e1\") " 
pod="watcher-kuttl-default/watchertest-account-delete-4z8cf" Dec 02 20:42:07 crc kubenswrapper[4796]: I1202 20:42:07.816352 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0add6e59-6e90-4a8d-ad2f-5a8e6691f9e1-operator-scripts\") pod \"watchertest-account-delete-4z8cf\" (UID: \"0add6e59-6e90-4a8d-ad2f-5a8e6691f9e1\") " pod="watcher-kuttl-default/watchertest-account-delete-4z8cf" Dec 02 20:42:07 crc kubenswrapper[4796]: I1202 20:42:07.917447 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0add6e59-6e90-4a8d-ad2f-5a8e6691f9e1-operator-scripts\") pod \"watchertest-account-delete-4z8cf\" (UID: \"0add6e59-6e90-4a8d-ad2f-5a8e6691f9e1\") " pod="watcher-kuttl-default/watchertest-account-delete-4z8cf" Dec 02 20:42:07 crc kubenswrapper[4796]: I1202 20:42:07.917515 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtdsd\" (UniqueName: \"kubernetes.io/projected/0add6e59-6e90-4a8d-ad2f-5a8e6691f9e1-kube-api-access-vtdsd\") pod \"watchertest-account-delete-4z8cf\" (UID: \"0add6e59-6e90-4a8d-ad2f-5a8e6691f9e1\") " pod="watcher-kuttl-default/watchertest-account-delete-4z8cf" Dec 02 20:42:07 crc kubenswrapper[4796]: I1202 20:42:07.918121 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0add6e59-6e90-4a8d-ad2f-5a8e6691f9e1-operator-scripts\") pod \"watchertest-account-delete-4z8cf\" (UID: \"0add6e59-6e90-4a8d-ad2f-5a8e6691f9e1\") " pod="watcher-kuttl-default/watchertest-account-delete-4z8cf" Dec 02 20:42:07 crc kubenswrapper[4796]: I1202 20:42:07.935980 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtdsd\" (UniqueName: \"kubernetes.io/projected/0add6e59-6e90-4a8d-ad2f-5a8e6691f9e1-kube-api-access-vtdsd\") pod \"watchertest-account-delete-4z8cf\" (UID: \"0add6e59-6e90-4a8d-ad2f-5a8e6691f9e1\") " pod="watcher-kuttl-default/watchertest-account-delete-4z8cf" Dec 02 20:42:07 crc kubenswrapper[4796]: I1202 20:42:07.991094 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watchertest-account-delete-4z8cf" Dec 02 20:42:08 crc kubenswrapper[4796]: I1202 20:42:08.060950 4796 generic.go:334] "Generic (PLEG): container finished" podID="abb95a24-b20f-4f5d-b988-f994a63683d7" containerID="2374729028b8771660bb3ba0bbdb9ccc1e19e0a2bf77a1d2264ab6edfcdbc4c4" exitCode=143 Dec 02 20:42:08 crc kubenswrapper[4796]: I1202 20:42:08.061012 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"abb95a24-b20f-4f5d-b988-f994a63683d7","Type":"ContainerDied","Data":"2374729028b8771660bb3ba0bbdb9ccc1e19e0a2bf77a1d2264ab6edfcdbc4c4"} Dec 02 20:42:08 crc kubenswrapper[4796]: I1202 20:42:08.064141 4796 generic.go:334] "Generic (PLEG): container finished" podID="fe178b48-56b0-493c-a133-ca8dacce432b" containerID="069381eaf59b25cd25ce5951d60e340c488bda63620937e27b5ae3610b9d2d27" exitCode=143 Dec 02 20:42:08 crc kubenswrapper[4796]: I1202 20:42:08.064178 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"fe178b48-56b0-493c-a133-ca8dacce432b","Type":"ContainerDied","Data":"069381eaf59b25cd25ce5951d60e340c488bda63620937e27b5ae3610b9d2d27"} Dec 02 20:42:08 crc kubenswrapper[4796]: I1202 20:42:08.454481 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-4z8cf"] Dec 02 20:42:08 crc kubenswrapper[4796]: W1202 20:42:08.456474 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0add6e59_6e90_4a8d_ad2f_5a8e6691f9e1.slice/crio-9bcdef7c1ec50e6ebb476046f57bceb2a97c99fb9e07c92db901fb90e15ae21b WatchSource:0}: Error finding container 9bcdef7c1ec50e6ebb476046f57bceb2a97c99fb9e07c92db901fb90e15ae21b: Status 404 returned error can't find the container with id 9bcdef7c1ec50e6ebb476046f57bceb2a97c99fb9e07c92db901fb90e15ae21b Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.007371 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.078921 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchertest-account-delete-4z8cf" event={"ID":"0add6e59-6e90-4a8d-ad2f-5a8e6691f9e1","Type":"ContainerStarted","Data":"f341b1f62ab5a2a27b153bc2905485e78832530b025c3dd18a5ba677d14c86ec"} Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.079164 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchertest-account-delete-4z8cf" event={"ID":"0add6e59-6e90-4a8d-ad2f-5a8e6691f9e1","Type":"ContainerStarted","Data":"9bcdef7c1ec50e6ebb476046f57bceb2a97c99fb9e07c92db901fb90e15ae21b"} Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.082576 4796 generic.go:334] "Generic (PLEG): container finished" podID="abb95a24-b20f-4f5d-b988-f994a63683d7" containerID="adad9541234634e9ec7f1b6b15be94a80d0721da319b16ea54096687a07d3281" exitCode=0 Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.082781 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.083035 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"abb95a24-b20f-4f5d-b988-f994a63683d7","Type":"ContainerDied","Data":"adad9541234634e9ec7f1b6b15be94a80d0721da319b16ea54096687a07d3281"} Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.083093 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"abb95a24-b20f-4f5d-b988-f994a63683d7","Type":"ContainerDied","Data":"121f62730792f4171f76b8e0e5cbecf2ce95d8a984919f9a2316bbc5440c2230"} Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.083124 4796 scope.go:117] "RemoveContainer" containerID="adad9541234634e9ec7f1b6b15be94a80d0721da319b16ea54096687a07d3281" Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.094304 4796 generic.go:334] "Generic (PLEG): container finished" podID="fe178b48-56b0-493c-a133-ca8dacce432b" containerID="ea5eca9a5fdcc7c5ef64dd0b247ca2e9d387d618a026623ba7f7014f153f2ce4" exitCode=0 Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.094340 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"fe178b48-56b0-493c-a133-ca8dacce432b","Type":"ContainerDied","Data":"ea5eca9a5fdcc7c5ef64dd0b247ca2e9d387d618a026623ba7f7014f153f2ce4"} Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.102954 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watchertest-account-delete-4z8cf" podStartSLOduration=2.102927319 podStartE2EDuration="2.102927319s" podCreationTimestamp="2025-12-02 20:42:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 20:42:09.094350093 +0000 UTC m=+1812.097725627" watchObservedRunningTime="2025-12-02 20:42:09.102927319 +0000 UTC m=+1812.106302853" Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.157238 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/abb95a24-b20f-4f5d-b988-f994a63683d7-cert-memcached-mtls\") pod \"abb95a24-b20f-4f5d-b988-f994a63683d7\" (UID: \"abb95a24-b20f-4f5d-b988-f994a63683d7\") " Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.157307 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/abb95a24-b20f-4f5d-b988-f994a63683d7-custom-prometheus-ca\") pod \"abb95a24-b20f-4f5d-b988-f994a63683d7\" (UID: \"abb95a24-b20f-4f5d-b988-f994a63683d7\") " Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.157374 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abb95a24-b20f-4f5d-b988-f994a63683d7-logs\") pod \"abb95a24-b20f-4f5d-b988-f994a63683d7\" (UID: \"abb95a24-b20f-4f5d-b988-f994a63683d7\") " Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.157429 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abb95a24-b20f-4f5d-b988-f994a63683d7-combined-ca-bundle\") pod \"abb95a24-b20f-4f5d-b988-f994a63683d7\" (UID: \"abb95a24-b20f-4f5d-b988-f994a63683d7\") " Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.157491 4796 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-smghl\" (UniqueName: \"kubernetes.io/projected/abb95a24-b20f-4f5d-b988-f994a63683d7-kube-api-access-smghl\") pod \"abb95a24-b20f-4f5d-b988-f994a63683d7\" (UID: \"abb95a24-b20f-4f5d-b988-f994a63683d7\") " Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.157737 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abb95a24-b20f-4f5d-b988-f994a63683d7-config-data\") pod \"abb95a24-b20f-4f5d-b988-f994a63683d7\" (UID: \"abb95a24-b20f-4f5d-b988-f994a63683d7\") " Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.158246 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abb95a24-b20f-4f5d-b988-f994a63683d7-logs" (OuterVolumeSpecName: "logs") pod "abb95a24-b20f-4f5d-b988-f994a63683d7" (UID: "abb95a24-b20f-4f5d-b988-f994a63683d7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.158627 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abb95a24-b20f-4f5d-b988-f994a63683d7-logs\") on node \"crc\" DevicePath \"\"" Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.163425 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abb95a24-b20f-4f5d-b988-f994a63683d7-kube-api-access-smghl" (OuterVolumeSpecName: "kube-api-access-smghl") pod "abb95a24-b20f-4f5d-b988-f994a63683d7" (UID: "abb95a24-b20f-4f5d-b988-f994a63683d7"). InnerVolumeSpecName "kube-api-access-smghl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.169597 4796 scope.go:117] "RemoveContainer" containerID="2374729028b8771660bb3ba0bbdb9ccc1e19e0a2bf77a1d2264ab6edfcdbc4c4" Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.226943 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abb95a24-b20f-4f5d-b988-f994a63683d7-config-data" (OuterVolumeSpecName: "config-data") pod "abb95a24-b20f-4f5d-b988-f994a63683d7" (UID: "abb95a24-b20f-4f5d-b988-f994a63683d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.248377 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abb95a24-b20f-4f5d-b988-f994a63683d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "abb95a24-b20f-4f5d-b988-f994a63683d7" (UID: "abb95a24-b20f-4f5d-b988-f994a63683d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.250743 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abb95a24-b20f-4f5d-b988-f994a63683d7-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "abb95a24-b20f-4f5d-b988-f994a63683d7" (UID: "abb95a24-b20f-4f5d-b988-f994a63683d7"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.257432 4796 scope.go:117] "RemoveContainer" containerID="adad9541234634e9ec7f1b6b15be94a80d0721da319b16ea54096687a07d3281" Dec 02 20:42:09 crc kubenswrapper[4796]: E1202 20:42:09.258180 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adad9541234634e9ec7f1b6b15be94a80d0721da319b16ea54096687a07d3281\": container with ID starting with adad9541234634e9ec7f1b6b15be94a80d0721da319b16ea54096687a07d3281 not found: ID does not exist" containerID="adad9541234634e9ec7f1b6b15be94a80d0721da319b16ea54096687a07d3281" Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.258788 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adad9541234634e9ec7f1b6b15be94a80d0721da319b16ea54096687a07d3281"} err="failed to get container status \"adad9541234634e9ec7f1b6b15be94a80d0721da319b16ea54096687a07d3281\": rpc error: code = NotFound desc = could not find container \"adad9541234634e9ec7f1b6b15be94a80d0721da319b16ea54096687a07d3281\": container with ID starting with adad9541234634e9ec7f1b6b15be94a80d0721da319b16ea54096687a07d3281 not found: ID does not exist" Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.258874 4796 scope.go:117] "RemoveContainer" containerID="2374729028b8771660bb3ba0bbdb9ccc1e19e0a2bf77a1d2264ab6edfcdbc4c4" Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.258763 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:42:09 crc kubenswrapper[4796]: E1202 20:42:09.259705 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2374729028b8771660bb3ba0bbdb9ccc1e19e0a2bf77a1d2264ab6edfcdbc4c4\": container with ID starting with 2374729028b8771660bb3ba0bbdb9ccc1e19e0a2bf77a1d2264ab6edfcdbc4c4 not found: ID does not exist" containerID="2374729028b8771660bb3ba0bbdb9ccc1e19e0a2bf77a1d2264ab6edfcdbc4c4" Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.259727 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abb95a24-b20f-4f5d-b988-f994a63683d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.259743 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smghl\" (UniqueName: \"kubernetes.io/projected/abb95a24-b20f-4f5d-b988-f994a63683d7-kube-api-access-smghl\") on node \"crc\" DevicePath \"\"" Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.259753 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abb95a24-b20f-4f5d-b988-f994a63683d7-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.259746 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2374729028b8771660bb3ba0bbdb9ccc1e19e0a2bf77a1d2264ab6edfcdbc4c4"} err="failed to get container status \"2374729028b8771660bb3ba0bbdb9ccc1e19e0a2bf77a1d2264ab6edfcdbc4c4\": rpc error: code = NotFound desc = could not find container \"2374729028b8771660bb3ba0bbdb9ccc1e19e0a2bf77a1d2264ab6edfcdbc4c4\": container with ID starting with 2374729028b8771660bb3ba0bbdb9ccc1e19e0a2bf77a1d2264ab6edfcdbc4c4 not found: ID does not exist" Dec 02 20:42:09 crc 
kubenswrapper[4796]: I1202 20:42:09.259762 4796 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/abb95a24-b20f-4f5d-b988-f994a63683d7-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.263338 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abb95a24-b20f-4f5d-b988-f994a63683d7-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "abb95a24-b20f-4f5d-b988-f994a63683d7" (UID: "abb95a24-b20f-4f5d-b988-f994a63683d7"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.280839 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2657dcbd-334e-4677-831d-3e42c54834a3" path="/var/lib/kubelet/pods/2657dcbd-334e-4677-831d-3e42c54834a3/volumes" Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.281577 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="858d71d0-255f-4035-8ca7-3cb9a2b70ced" path="/var/lib/kubelet/pods/858d71d0-255f-4035-8ca7-3cb9a2b70ced/volumes" Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.361143 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvgkm\" (UniqueName: \"kubernetes.io/projected/fe178b48-56b0-493c-a133-ca8dacce432b-kube-api-access-zvgkm\") pod \"fe178b48-56b0-493c-a133-ca8dacce432b\" (UID: \"fe178b48-56b0-493c-a133-ca8dacce432b\") " Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.361293 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe178b48-56b0-493c-a133-ca8dacce432b-config-data\") pod \"fe178b48-56b0-493c-a133-ca8dacce432b\" (UID: \"fe178b48-56b0-493c-a133-ca8dacce432b\") " Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.361331 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/fe178b48-56b0-493c-a133-ca8dacce432b-custom-prometheus-ca\") pod \"fe178b48-56b0-493c-a133-ca8dacce432b\" (UID: \"fe178b48-56b0-493c-a133-ca8dacce432b\") " Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.361363 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/fe178b48-56b0-493c-a133-ca8dacce432b-cert-memcached-mtls\") pod \"fe178b48-56b0-493c-a133-ca8dacce432b\" (UID: \"fe178b48-56b0-493c-a133-ca8dacce432b\") " Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.361401 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe178b48-56b0-493c-a133-ca8dacce432b-combined-ca-bundle\") pod \"fe178b48-56b0-493c-a133-ca8dacce432b\" (UID: \"fe178b48-56b0-493c-a133-ca8dacce432b\") " Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.361464 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe178b48-56b0-493c-a133-ca8dacce432b-logs\") pod \"fe178b48-56b0-493c-a133-ca8dacce432b\" (UID: \"fe178b48-56b0-493c-a133-ca8dacce432b\") " Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.361927 4796 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/abb95a24-b20f-4f5d-b988-f994a63683d7-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.361986 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe178b48-56b0-493c-a133-ca8dacce432b-logs" (OuterVolumeSpecName: "logs") pod "fe178b48-56b0-493c-a133-ca8dacce432b" (UID: "fe178b48-56b0-493c-a133-ca8dacce432b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.364331 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe178b48-56b0-493c-a133-ca8dacce432b-kube-api-access-zvgkm" (OuterVolumeSpecName: "kube-api-access-zvgkm") pod "fe178b48-56b0-493c-a133-ca8dacce432b" (UID: "fe178b48-56b0-493c-a133-ca8dacce432b"). InnerVolumeSpecName "kube-api-access-zvgkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.381852 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe178b48-56b0-493c-a133-ca8dacce432b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe178b48-56b0-493c-a133-ca8dacce432b" (UID: "fe178b48-56b0-493c-a133-ca8dacce432b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.389007 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe178b48-56b0-493c-a133-ca8dacce432b-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "fe178b48-56b0-493c-a133-ca8dacce432b" (UID: "fe178b48-56b0-493c-a133-ca8dacce432b"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.404076 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.411483 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe178b48-56b0-493c-a133-ca8dacce432b-config-data" (OuterVolumeSpecName: "config-data") pod "fe178b48-56b0-493c-a133-ca8dacce432b" (UID: "fe178b48-56b0-493c-a133-ca8dacce432b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.413658 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.446483 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe178b48-56b0-493c-a133-ca8dacce432b-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "fe178b48-56b0-493c-a133-ca8dacce432b" (UID: "fe178b48-56b0-493c-a133-ca8dacce432b"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.463490 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe178b48-56b0-493c-a133-ca8dacce432b-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.463522 4796 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/fe178b48-56b0-493c-a133-ca8dacce432b-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.463532 4796 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/fe178b48-56b0-493c-a133-ca8dacce432b-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.463544 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe178b48-56b0-493c-a133-ca8dacce432b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.463553 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe178b48-56b0-493c-a133-ca8dacce432b-logs\") on node \"crc\" DevicePath \"\"" Dec 02 20:42:09 crc kubenswrapper[4796]: I1202 20:42:09.463564 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvgkm\" (UniqueName: \"kubernetes.io/projected/fe178b48-56b0-493c-a133-ca8dacce432b-kube-api-access-zvgkm\") on node \"crc\" DevicePath \"\"" Dec 02 20:42:10 crc kubenswrapper[4796]: I1202 20:42:10.107117 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"fe178b48-56b0-493c-a133-ca8dacce432b","Type":"ContainerDied","Data":"12cb267383263f397875385fed994fb65c35e7b6f23122ab919a12799c4e7a35"} Dec 02 20:42:10 crc kubenswrapper[4796]: I1202 20:42:10.107196 4796 scope.go:117] "RemoveContainer" containerID="ea5eca9a5fdcc7c5ef64dd0b247ca2e9d387d618a026623ba7f7014f153f2ce4" Dec 02 20:42:10 crc kubenswrapper[4796]: I1202 20:42:10.107285 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Dec 02 20:42:10 crc kubenswrapper[4796]: I1202 20:42:10.116049 4796 generic.go:334] "Generic (PLEG): container finished" podID="0add6e59-6e90-4a8d-ad2f-5a8e6691f9e1" containerID="f341b1f62ab5a2a27b153bc2905485e78832530b025c3dd18a5ba677d14c86ec" exitCode=0 Dec 02 20:42:10 crc kubenswrapper[4796]: I1202 20:42:10.116114 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchertest-account-delete-4z8cf" event={"ID":"0add6e59-6e90-4a8d-ad2f-5a8e6691f9e1","Type":"ContainerDied","Data":"f341b1f62ab5a2a27b153bc2905485e78832530b025c3dd18a5ba677d14c86ec"} Dec 02 20:42:10 crc kubenswrapper[4796]: I1202 20:42:10.127634 4796 scope.go:117] "RemoveContainer" containerID="069381eaf59b25cd25ce5951d60e340c488bda63620937e27b5ae3610b9d2d27" Dec 02 20:42:10 crc kubenswrapper[4796]: I1202 20:42:10.160729 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:42:10 crc kubenswrapper[4796]: I1202 20:42:10.166302 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Dec 02 20:42:10 crc kubenswrapper[4796]: I1202 20:42:10.274023 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:42:10 crc kubenswrapper[4796]: I1202 20:42:10.274372 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="68c7dbaf-3568-4e03-ba64-c0aafac02efd" containerName="ceilometer-central-agent" containerID="cri-o://b28e60c8625c0d2f87c94bf8aa16c6074cd0eba1d050c94166f8d9dc37cc1df7" gracePeriod=30 Dec 02 20:42:10 crc kubenswrapper[4796]: I1202 20:42:10.274438 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="68c7dbaf-3568-4e03-ba64-c0aafac02efd" containerName="sg-core" containerID="cri-o://7150d1cde44053b29f905e304c51d05b9f4e2bbe4923075c21b5a41c919b0c3f" gracePeriod=30 Dec 02 20:42:10 crc kubenswrapper[4796]: I1202 20:42:10.274482 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="68c7dbaf-3568-4e03-ba64-c0aafac02efd" containerName="ceilometer-notification-agent" containerID="cri-o://9afb9bd9ee0d90898b6453f7145d9d1c84618135ba5e17267a725b63baff88bc" gracePeriod=30 Dec 02 20:42:10 crc kubenswrapper[4796]: I1202 20:42:10.274518 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="68c7dbaf-3568-4e03-ba64-c0aafac02efd" containerName="proxy-httpd" containerID="cri-o://88846f6868acbd70f11460e4a447156a2052e0798dcc123b91bb60080b2ad659" gracePeriod=30 Dec 02 20:42:10 crc kubenswrapper[4796]: I1202 20:42:10.280219 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="68c7dbaf-3568-4e03-ba64-c0aafac02efd" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.225:3000/\": EOF" Dec 02 20:42:10 crc kubenswrapper[4796]: I1202 20:42:10.817375 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:42:10 crc kubenswrapper[4796]: I1202 20:42:10.883624 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f5576a7-0257-4785-b47e-b6990f016c51-config-data\") pod \"6f5576a7-0257-4785-b47e-b6990f016c51\" (UID: \"6f5576a7-0257-4785-b47e-b6990f016c51\") " Dec 02 20:42:10 crc kubenswrapper[4796]: I1202 20:42:10.883689 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xc2x2\" (UniqueName: \"kubernetes.io/projected/6f5576a7-0257-4785-b47e-b6990f016c51-kube-api-access-xc2x2\") pod \"6f5576a7-0257-4785-b47e-b6990f016c51\" (UID: \"6f5576a7-0257-4785-b47e-b6990f016c51\") " Dec 02 20:42:10 crc kubenswrapper[4796]: I1202 20:42:10.883838 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f5576a7-0257-4785-b47e-b6990f016c51-logs\") pod \"6f5576a7-0257-4785-b47e-b6990f016c51\" (UID: \"6f5576a7-0257-4785-b47e-b6990f016c51\") " Dec 02 20:42:10 crc kubenswrapper[4796]: I1202 20:42:10.884357 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f5576a7-0257-4785-b47e-b6990f016c51-logs" (OuterVolumeSpecName: "logs") pod "6f5576a7-0257-4785-b47e-b6990f016c51" (UID: "6f5576a7-0257-4785-b47e-b6990f016c51"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:42:10 crc kubenswrapper[4796]: I1202 20:42:10.890392 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f5576a7-0257-4785-b47e-b6990f016c51-kube-api-access-xc2x2" (OuterVolumeSpecName: "kube-api-access-xc2x2") pod "6f5576a7-0257-4785-b47e-b6990f016c51" (UID: "6f5576a7-0257-4785-b47e-b6990f016c51"). InnerVolumeSpecName "kube-api-access-xc2x2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:42:10 crc kubenswrapper[4796]: I1202 20:42:10.942407 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f5576a7-0257-4785-b47e-b6990f016c51-config-data" (OuterVolumeSpecName: "config-data") pod "6f5576a7-0257-4785-b47e-b6990f016c51" (UID: "6f5576a7-0257-4785-b47e-b6990f016c51"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:42:10 crc kubenswrapper[4796]: I1202 20:42:10.985157 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5576a7-0257-4785-b47e-b6990f016c51-combined-ca-bundle\") pod \"6f5576a7-0257-4785-b47e-b6990f016c51\" (UID: \"6f5576a7-0257-4785-b47e-b6990f016c51\") " Dec 02 20:42:10 crc kubenswrapper[4796]: I1202 20:42:10.985596 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6f5576a7-0257-4785-b47e-b6990f016c51-cert-memcached-mtls\") pod \"6f5576a7-0257-4785-b47e-b6990f016c51\" (UID: \"6f5576a7-0257-4785-b47e-b6990f016c51\") " Dec 02 20:42:10 crc kubenswrapper[4796]: I1202 20:42:10.986056 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f5576a7-0257-4785-b47e-b6990f016c51-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:42:10 crc kubenswrapper[4796]: I1202 20:42:10.986081 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xc2x2\" (UniqueName: \"kubernetes.io/projected/6f5576a7-0257-4785-b47e-b6990f016c51-kube-api-access-xc2x2\") on node \"crc\" DevicePath \"\"" Dec 02 20:42:10 crc kubenswrapper[4796]: I1202 20:42:10.986094 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f5576a7-0257-4785-b47e-b6990f016c51-logs\") on node \"crc\" DevicePath \"\"" Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.020793 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f5576a7-0257-4785-b47e-b6990f016c51-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f5576a7-0257-4785-b47e-b6990f016c51" (UID: "6f5576a7-0257-4785-b47e-b6990f016c51"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.070852 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f5576a7-0257-4785-b47e-b6990f016c51-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "6f5576a7-0257-4785-b47e-b6990f016c51" (UID: "6f5576a7-0257-4785-b47e-b6990f016c51"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.087369 4796 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6f5576a7-0257-4785-b47e-b6990f016c51-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.087405 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5576a7-0257-4785-b47e-b6990f016c51-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.147143 4796 generic.go:334] "Generic (PLEG): container finished" podID="6f5576a7-0257-4785-b47e-b6990f016c51" containerID="ef5b5bd27feb364a3f6145b81c91b3a71608b74307b64471317b67bda0ae10ed" exitCode=0 Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.147365 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"6f5576a7-0257-4785-b47e-b6990f016c51","Type":"ContainerDied","Data":"ef5b5bd27feb364a3f6145b81c91b3a71608b74307b64471317b67bda0ae10ed"} Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.147444 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"6f5576a7-0257-4785-b47e-b6990f016c51","Type":"ContainerDied","Data":"f1d823580a92deff2556e6191cb823b289b58fcbb9860869d300ccf4ce03d2b4"} Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.147579 4796 scope.go:117] "RemoveContainer" containerID="ef5b5bd27feb364a3f6145b81c91b3a71608b74307b64471317b67bda0ae10ed" Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.147754 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.164387 4796 generic.go:334] "Generic (PLEG): container finished" podID="68c7dbaf-3568-4e03-ba64-c0aafac02efd" containerID="88846f6868acbd70f11460e4a447156a2052e0798dcc123b91bb60080b2ad659" exitCode=0 Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.164420 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"68c7dbaf-3568-4e03-ba64-c0aafac02efd","Type":"ContainerDied","Data":"88846f6868acbd70f11460e4a447156a2052e0798dcc123b91bb60080b2ad659"} Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.164457 4796 generic.go:334] "Generic (PLEG): container finished" podID="68c7dbaf-3568-4e03-ba64-c0aafac02efd" containerID="7150d1cde44053b29f905e304c51d05b9f4e2bbe4923075c21b5a41c919b0c3f" exitCode=2 Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.164464 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"68c7dbaf-3568-4e03-ba64-c0aafac02efd","Type":"ContainerDied","Data":"7150d1cde44053b29f905e304c51d05b9f4e2bbe4923075c21b5a41c919b0c3f"} Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.164469 4796 generic.go:334] "Generic (PLEG): container finished" podID="68c7dbaf-3568-4e03-ba64-c0aafac02efd" containerID="b28e60c8625c0d2f87c94bf8aa16c6074cd0eba1d050c94166f8d9dc37cc1df7" exitCode=0 Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.164476 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"68c7dbaf-3568-4e03-ba64-c0aafac02efd","Type":"ContainerDied","Data":"b28e60c8625c0d2f87c94bf8aa16c6074cd0eba1d050c94166f8d9dc37cc1df7"} Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.167070 4796 generic.go:334] "Generic (PLEG): container finished" podID="067c599c-9b7a-4db5-a6a9-62f520e4b67f" containerID="727b9b0f2943774efb0e489a034cdd1a2b4c436ad9716930deba4a40fadc4716" exitCode=0 Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.167368 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"067c599c-9b7a-4db5-a6a9-62f520e4b67f","Type":"ContainerDied","Data":"727b9b0f2943774efb0e489a034cdd1a2b4c436ad9716930deba4a40fadc4716"} Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.202761 4796 scope.go:117] "RemoveContainer" containerID="ef5b5bd27feb364a3f6145b81c91b3a71608b74307b64471317b67bda0ae10ed" Dec 02 20:42:11 crc kubenswrapper[4796]: E1202 20:42:11.206838 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef5b5bd27feb364a3f6145b81c91b3a71608b74307b64471317b67bda0ae10ed\": container with ID starting with ef5b5bd27feb364a3f6145b81c91b3a71608b74307b64471317b67bda0ae10ed not found: ID does not exist" containerID="ef5b5bd27feb364a3f6145b81c91b3a71608b74307b64471317b67bda0ae10ed" Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.206875 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef5b5bd27feb364a3f6145b81c91b3a71608b74307b64471317b67bda0ae10ed"} err="failed to get container status \"ef5b5bd27feb364a3f6145b81c91b3a71608b74307b64471317b67bda0ae10ed\": rpc error: code = NotFound desc = could not find container \"ef5b5bd27feb364a3f6145b81c91b3a71608b74307b64471317b67bda0ae10ed\": container with ID starting with 
ef5b5bd27feb364a3f6145b81c91b3a71608b74307b64471317b67bda0ae10ed not found: ID does not exist" Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.209777 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.223015 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.273824 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f5576a7-0257-4785-b47e-b6990f016c51" path="/var/lib/kubelet/pods/6f5576a7-0257-4785-b47e-b6990f016c51/volumes" Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.274491 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abb95a24-b20f-4f5d-b988-f994a63683d7" path="/var/lib/kubelet/pods/abb95a24-b20f-4f5d-b988-f994a63683d7/volumes" Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.275242 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe178b48-56b0-493c-a133-ca8dacce432b" path="/var/lib/kubelet/pods/fe178b48-56b0-493c-a133-ca8dacce432b/volumes" Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.402342 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.494567 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/067c599c-9b7a-4db5-a6a9-62f520e4b67f-combined-ca-bundle\") pod \"067c599c-9b7a-4db5-a6a9-62f520e4b67f\" (UID: \"067c599c-9b7a-4db5-a6a9-62f520e4b67f\") " Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.494724 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/067c599c-9b7a-4db5-a6a9-62f520e4b67f-config-data\") pod \"067c599c-9b7a-4db5-a6a9-62f520e4b67f\" (UID: \"067c599c-9b7a-4db5-a6a9-62f520e4b67f\") " Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.494770 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/067c599c-9b7a-4db5-a6a9-62f520e4b67f-cert-memcached-mtls\") pod \"067c599c-9b7a-4db5-a6a9-62f520e4b67f\" (UID: \"067c599c-9b7a-4db5-a6a9-62f520e4b67f\") " Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.494816 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/067c599c-9b7a-4db5-a6a9-62f520e4b67f-custom-prometheus-ca\") pod \"067c599c-9b7a-4db5-a6a9-62f520e4b67f\" (UID: \"067c599c-9b7a-4db5-a6a9-62f520e4b67f\") " Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.494886 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/067c599c-9b7a-4db5-a6a9-62f520e4b67f-logs\") pod \"067c599c-9b7a-4db5-a6a9-62f520e4b67f\" (UID: \"067c599c-9b7a-4db5-a6a9-62f520e4b67f\") " Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.494912 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmhxm\" (UniqueName: \"kubernetes.io/projected/067c599c-9b7a-4db5-a6a9-62f520e4b67f-kube-api-access-pmhxm\") pod \"067c599c-9b7a-4db5-a6a9-62f520e4b67f\" (UID: \"067c599c-9b7a-4db5-a6a9-62f520e4b67f\") " Dec 02 20:42:11 crc 
kubenswrapper[4796]: I1202 20:42:11.497417 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/067c599c-9b7a-4db5-a6a9-62f520e4b67f-logs" (OuterVolumeSpecName: "logs") pod "067c599c-9b7a-4db5-a6a9-62f520e4b67f" (UID: "067c599c-9b7a-4db5-a6a9-62f520e4b67f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.499875 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/067c599c-9b7a-4db5-a6a9-62f520e4b67f-kube-api-access-pmhxm" (OuterVolumeSpecName: "kube-api-access-pmhxm") pod "067c599c-9b7a-4db5-a6a9-62f520e4b67f" (UID: "067c599c-9b7a-4db5-a6a9-62f520e4b67f"). InnerVolumeSpecName "kube-api-access-pmhxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.506316 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watchertest-account-delete-4z8cf" Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.519636 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/067c599c-9b7a-4db5-a6a9-62f520e4b67f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "067c599c-9b7a-4db5-a6a9-62f520e4b67f" (UID: "067c599c-9b7a-4db5-a6a9-62f520e4b67f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.546988 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/067c599c-9b7a-4db5-a6a9-62f520e4b67f-config-data" (OuterVolumeSpecName: "config-data") pod "067c599c-9b7a-4db5-a6a9-62f520e4b67f" (UID: "067c599c-9b7a-4db5-a6a9-62f520e4b67f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.548689 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/067c599c-9b7a-4db5-a6a9-62f520e4b67f-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "067c599c-9b7a-4db5-a6a9-62f520e4b67f" (UID: "067c599c-9b7a-4db5-a6a9-62f520e4b67f"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.574042 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/067c599c-9b7a-4db5-a6a9-62f520e4b67f-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "067c599c-9b7a-4db5-a6a9-62f520e4b67f" (UID: "067c599c-9b7a-4db5-a6a9-62f520e4b67f"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.596488 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0add6e59-6e90-4a8d-ad2f-5a8e6691f9e1-operator-scripts\") pod \"0add6e59-6e90-4a8d-ad2f-5a8e6691f9e1\" (UID: \"0add6e59-6e90-4a8d-ad2f-5a8e6691f9e1\") " Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.596597 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtdsd\" (UniqueName: \"kubernetes.io/projected/0add6e59-6e90-4a8d-ad2f-5a8e6691f9e1-kube-api-access-vtdsd\") pod \"0add6e59-6e90-4a8d-ad2f-5a8e6691f9e1\" (UID: \"0add6e59-6e90-4a8d-ad2f-5a8e6691f9e1\") " Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.597139 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/067c599c-9b7a-4db5-a6a9-62f520e4b67f-logs\") on node \"crc\" DevicePath \"\"" Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.597156 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmhxm\" (UniqueName: \"kubernetes.io/projected/067c599c-9b7a-4db5-a6a9-62f520e4b67f-kube-api-access-pmhxm\") on node \"crc\" DevicePath \"\"" Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.597169 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/067c599c-9b7a-4db5-a6a9-62f520e4b67f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.597178 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/067c599c-9b7a-4db5-a6a9-62f520e4b67f-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.597187 4796 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/067c599c-9b7a-4db5-a6a9-62f520e4b67f-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.597195 4796 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/067c599c-9b7a-4db5-a6a9-62f520e4b67f-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.597210 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0add6e59-6e90-4a8d-ad2f-5a8e6691f9e1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0add6e59-6e90-4a8d-ad2f-5a8e6691f9e1" (UID: "0add6e59-6e90-4a8d-ad2f-5a8e6691f9e1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.599524 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0add6e59-6e90-4a8d-ad2f-5a8e6691f9e1-kube-api-access-vtdsd" (OuterVolumeSpecName: "kube-api-access-vtdsd") pod "0add6e59-6e90-4a8d-ad2f-5a8e6691f9e1" (UID: "0add6e59-6e90-4a8d-ad2f-5a8e6691f9e1"). InnerVolumeSpecName "kube-api-access-vtdsd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.699394 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0add6e59-6e90-4a8d-ad2f-5a8e6691f9e1-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:42:11 crc kubenswrapper[4796]: I1202 20:42:11.699451 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtdsd\" (UniqueName: \"kubernetes.io/projected/0add6e59-6e90-4a8d-ad2f-5a8e6691f9e1-kube-api-access-vtdsd\") on node \"crc\" DevicePath \"\"" Dec 02 20:42:12 crc kubenswrapper[4796]: I1202 20:42:12.178300 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"067c599c-9b7a-4db5-a6a9-62f520e4b67f","Type":"ContainerDied","Data":"a218ae6bbfc26f0e13530de6eac78991d463e92a5231e410cf0a6b6bef1f93f0"} Dec 02 20:42:12 crc kubenswrapper[4796]: I1202 20:42:12.178330 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Dec 02 20:42:12 crc kubenswrapper[4796]: I1202 20:42:12.178371 4796 scope.go:117] "RemoveContainer" containerID="727b9b0f2943774efb0e489a034cdd1a2b4c436ad9716930deba4a40fadc4716" Dec 02 20:42:12 crc kubenswrapper[4796]: I1202 20:42:12.181955 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchertest-account-delete-4z8cf" event={"ID":"0add6e59-6e90-4a8d-ad2f-5a8e6691f9e1","Type":"ContainerDied","Data":"9bcdef7c1ec50e6ebb476046f57bceb2a97c99fb9e07c92db901fb90e15ae21b"} Dec 02 20:42:12 crc kubenswrapper[4796]: I1202 20:42:12.182005 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bcdef7c1ec50e6ebb476046f57bceb2a97c99fb9e07c92db901fb90e15ae21b" Dec 02 20:42:12 crc kubenswrapper[4796]: I1202 20:42:12.182015 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watchertest-account-delete-4z8cf" Dec 02 20:42:12 crc kubenswrapper[4796]: I1202 20:42:12.232562 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:42:12 crc kubenswrapper[4796]: I1202 20:42:12.239199 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Dec 02 20:42:12 crc kubenswrapper[4796]: I1202 20:42:12.676532 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-9qxxc"] Dec 02 20:42:12 crc kubenswrapper[4796]: I1202 20:42:12.687225 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-9qxxc"] Dec 02 20:42:12 crc kubenswrapper[4796]: I1202 20:42:12.706907 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-xmjt8"] Dec 02 20:42:12 crc kubenswrapper[4796]: I1202 20:42:12.715891 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-xmjt8"] Dec 02 20:42:12 crc kubenswrapper[4796]: I1202 20:42:12.724265 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-4z8cf"] Dec 02 20:42:12 crc kubenswrapper[4796]: I1202 20:42:12.730199 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-4z8cf"] Dec 02 20:42:13 crc kubenswrapper[4796]: I1202 20:42:13.274534 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="067c599c-9b7a-4db5-a6a9-62f520e4b67f" path="/var/lib/kubelet/pods/067c599c-9b7a-4db5-a6a9-62f520e4b67f/volumes" Dec 02 20:42:13 crc kubenswrapper[4796]: I1202 20:42:13.275033 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0add6e59-6e90-4a8d-ad2f-5a8e6691f9e1" path="/var/lib/kubelet/pods/0add6e59-6e90-4a8d-ad2f-5a8e6691f9e1/volumes" Dec 02 20:42:13 crc kubenswrapper[4796]: I1202 20:42:13.275537 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3645ebb8-9035-4992-aeef-90d8c16a70ac" path="/var/lib/kubelet/pods/3645ebb8-9035-4992-aeef-90d8c16a70ac/volumes" Dec 02 20:42:13 crc kubenswrapper[4796]: I1202 20:42:13.276545 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="551fb351-61fa-4dd6-80a1-663a21ce01a9" path="/var/lib/kubelet/pods/551fb351-61fa-4dd6-80a1-663a21ce01a9/volumes" Dec 02 20:42:13 crc kubenswrapper[4796]: I1202 20:42:13.664353 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:42:13 crc kubenswrapper[4796]: I1202 20:42:13.742991 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68c7dbaf-3568-4e03-ba64-c0aafac02efd-run-httpd\") pod \"68c7dbaf-3568-4e03-ba64-c0aafac02efd\" (UID: \"68c7dbaf-3568-4e03-ba64-c0aafac02efd\") " Dec 02 20:42:13 crc kubenswrapper[4796]: I1202 20:42:13.743089 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/68c7dbaf-3568-4e03-ba64-c0aafac02efd-sg-core-conf-yaml\") pod \"68c7dbaf-3568-4e03-ba64-c0aafac02efd\" (UID: \"68c7dbaf-3568-4e03-ba64-c0aafac02efd\") " Dec 02 20:42:13 crc kubenswrapper[4796]: I1202 20:42:13.743141 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68c7dbaf-3568-4e03-ba64-c0aafac02efd-config-data\") pod \"68c7dbaf-3568-4e03-ba64-c0aafac02efd\" (UID: \"68c7dbaf-3568-4e03-ba64-c0aafac02efd\") " Dec 02 20:42:13 crc kubenswrapper[4796]: I1202 20:42:13.743175 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68c7dbaf-3568-4e03-ba64-c0aafac02efd-scripts\") pod \"68c7dbaf-3568-4e03-ba64-c0aafac02efd\" (UID: \"68c7dbaf-3568-4e03-ba64-c0aafac02efd\") " Dec 02 20:42:13 crc kubenswrapper[4796]: I1202 20:42:13.743232 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cblrx\" (UniqueName: \"kubernetes.io/projected/68c7dbaf-3568-4e03-ba64-c0aafac02efd-kube-api-access-cblrx\") pod \"68c7dbaf-3568-4e03-ba64-c0aafac02efd\" (UID: \"68c7dbaf-3568-4e03-ba64-c0aafac02efd\") " Dec 02 20:42:13 crc kubenswrapper[4796]: I1202 20:42:13.743313 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/68c7dbaf-3568-4e03-ba64-c0aafac02efd-ceilometer-tls-certs\") pod \"68c7dbaf-3568-4e03-ba64-c0aafac02efd\" (UID: \"68c7dbaf-3568-4e03-ba64-c0aafac02efd\") " Dec 02 20:42:13 crc kubenswrapper[4796]: I1202 20:42:13.743350 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68c7dbaf-3568-4e03-ba64-c0aafac02efd-log-httpd\") pod \"68c7dbaf-3568-4e03-ba64-c0aafac02efd\" (UID: \"68c7dbaf-3568-4e03-ba64-c0aafac02efd\") " Dec 02 20:42:13 crc kubenswrapper[4796]: I1202 20:42:13.743400 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68c7dbaf-3568-4e03-ba64-c0aafac02efd-combined-ca-bundle\") pod \"68c7dbaf-3568-4e03-ba64-c0aafac02efd\" (UID: \"68c7dbaf-3568-4e03-ba64-c0aafac02efd\") " Dec 02 20:42:13 crc kubenswrapper[4796]: I1202 20:42:13.749469 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68c7dbaf-3568-4e03-ba64-c0aafac02efd-scripts" (OuterVolumeSpecName: "scripts") pod "68c7dbaf-3568-4e03-ba64-c0aafac02efd" (UID: "68c7dbaf-3568-4e03-ba64-c0aafac02efd"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:42:13 crc kubenswrapper[4796]: I1202 20:42:13.749763 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68c7dbaf-3568-4e03-ba64-c0aafac02efd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "68c7dbaf-3568-4e03-ba64-c0aafac02efd" (UID: "68c7dbaf-3568-4e03-ba64-c0aafac02efd"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:42:13 crc kubenswrapper[4796]: I1202 20:42:13.758359 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68c7dbaf-3568-4e03-ba64-c0aafac02efd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "68c7dbaf-3568-4e03-ba64-c0aafac02efd" (UID: "68c7dbaf-3568-4e03-ba64-c0aafac02efd"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:42:13 crc kubenswrapper[4796]: I1202 20:42:13.761885 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68c7dbaf-3568-4e03-ba64-c0aafac02efd-kube-api-access-cblrx" (OuterVolumeSpecName: "kube-api-access-cblrx") pod "68c7dbaf-3568-4e03-ba64-c0aafac02efd" (UID: "68c7dbaf-3568-4e03-ba64-c0aafac02efd"). InnerVolumeSpecName "kube-api-access-cblrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:42:13 crc kubenswrapper[4796]: I1202 20:42:13.824481 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68c7dbaf-3568-4e03-ba64-c0aafac02efd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "68c7dbaf-3568-4e03-ba64-c0aafac02efd" (UID: "68c7dbaf-3568-4e03-ba64-c0aafac02efd"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:42:13 crc kubenswrapper[4796]: I1202 20:42:13.825426 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68c7dbaf-3568-4e03-ba64-c0aafac02efd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "68c7dbaf-3568-4e03-ba64-c0aafac02efd" (UID: "68c7dbaf-3568-4e03-ba64-c0aafac02efd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:42:13 crc kubenswrapper[4796]: I1202 20:42:13.834444 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68c7dbaf-3568-4e03-ba64-c0aafac02efd-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "68c7dbaf-3568-4e03-ba64-c0aafac02efd" (UID: "68c7dbaf-3568-4e03-ba64-c0aafac02efd"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:42:13 crc kubenswrapper[4796]: I1202 20:42:13.845190 4796 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/68c7dbaf-3568-4e03-ba64-c0aafac02efd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 20:42:13 crc kubenswrapper[4796]: I1202 20:42:13.845394 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68c7dbaf-3568-4e03-ba64-c0aafac02efd-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 20:42:13 crc kubenswrapper[4796]: I1202 20:42:13.845468 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cblrx\" (UniqueName: \"kubernetes.io/projected/68c7dbaf-3568-4e03-ba64-c0aafac02efd-kube-api-access-cblrx\") on node \"crc\" DevicePath \"\"" Dec 02 20:42:13 crc kubenswrapper[4796]: I1202 20:42:13.845532 4796 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/68c7dbaf-3568-4e03-ba64-c0aafac02efd-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 20:42:13 crc kubenswrapper[4796]: I1202 20:42:13.845586 4796 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68c7dbaf-3568-4e03-ba64-c0aafac02efd-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 20:42:13 crc kubenswrapper[4796]: I1202 20:42:13.845642 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68c7dbaf-3568-4e03-ba64-c0aafac02efd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 20:42:13 crc kubenswrapper[4796]: I1202 20:42:13.845705 4796 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68c7dbaf-3568-4e03-ba64-c0aafac02efd-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 20:42:13 crc kubenswrapper[4796]: I1202 20:42:13.845736 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68c7dbaf-3568-4e03-ba64-c0aafac02efd-config-data" (OuterVolumeSpecName: "config-data") pod "68c7dbaf-3568-4e03-ba64-c0aafac02efd" (UID: "68c7dbaf-3568-4e03-ba64-c0aafac02efd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:42:13 crc kubenswrapper[4796]: I1202 20:42:13.946854 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68c7dbaf-3568-4e03-ba64-c0aafac02efd-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 20:42:13 crc kubenswrapper[4796]: E1202 20:42:13.953840 4796 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.241:35188->38.102.83.241:34197: write tcp 38.102.83.241:35188->38.102.83.241:34197: write: broken pipe Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.211127 4796 generic.go:334] "Generic (PLEG): container finished" podID="68c7dbaf-3568-4e03-ba64-c0aafac02efd" containerID="9afb9bd9ee0d90898b6453f7145d9d1c84618135ba5e17267a725b63baff88bc" exitCode=0 Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.211180 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"68c7dbaf-3568-4e03-ba64-c0aafac02efd","Type":"ContainerDied","Data":"9afb9bd9ee0d90898b6453f7145d9d1c84618135ba5e17267a725b63baff88bc"} Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.211212 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"68c7dbaf-3568-4e03-ba64-c0aafac02efd","Type":"ContainerDied","Data":"a1c5016017933d72452ca9a0e71e5f948c8de62c48870611be8bcacc0202636d"} Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.211216 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.211235 4796 scope.go:117] "RemoveContainer" containerID="88846f6868acbd70f11460e4a447156a2052e0798dcc123b91bb60080b2ad659" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.251177 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.259987 4796 scope.go:117] "RemoveContainer" containerID="7150d1cde44053b29f905e304c51d05b9f4e2bbe4923075c21b5a41c919b0c3f" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.267314 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.287309 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.296727 4796 scope.go:117] "RemoveContainer" containerID="9afb9bd9ee0d90898b6453f7145d9d1c84618135ba5e17267a725b63baff88bc" Dec 02 20:42:14 crc kubenswrapper[4796]: E1202 20:42:14.296901 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abb95a24-b20f-4f5d-b988-f994a63683d7" containerName="watcher-api" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.296929 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="abb95a24-b20f-4f5d-b988-f994a63683d7" containerName="watcher-api" Dec 02 20:42:14 crc kubenswrapper[4796]: E1202 20:42:14.296959 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f5576a7-0257-4785-b47e-b6990f016c51" containerName="watcher-applier" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.296967 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f5576a7-0257-4785-b47e-b6990f016c51" containerName="watcher-applier" Dec 02 20:42:14 crc kubenswrapper[4796]: E1202 20:42:14.296990 4796 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="68c7dbaf-3568-4e03-ba64-c0aafac02efd" containerName="ceilometer-notification-agent" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.296998 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="68c7dbaf-3568-4e03-ba64-c0aafac02efd" containerName="ceilometer-notification-agent" Dec 02 20:42:14 crc kubenswrapper[4796]: E1202 20:42:14.297005 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68c7dbaf-3568-4e03-ba64-c0aafac02efd" containerName="sg-core" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.297011 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="68c7dbaf-3568-4e03-ba64-c0aafac02efd" containerName="sg-core" Dec 02 20:42:14 crc kubenswrapper[4796]: E1202 20:42:14.297020 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe178b48-56b0-493c-a133-ca8dacce432b" containerName="watcher-kuttl-api-log" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.297044 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe178b48-56b0-493c-a133-ca8dacce432b" containerName="watcher-kuttl-api-log" Dec 02 20:42:14 crc kubenswrapper[4796]: E1202 20:42:14.297055 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="067c599c-9b7a-4db5-a6a9-62f520e4b67f" containerName="watcher-decision-engine" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.297062 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="067c599c-9b7a-4db5-a6a9-62f520e4b67f" containerName="watcher-decision-engine" Dec 02 20:42:14 crc kubenswrapper[4796]: E1202 20:42:14.297072 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68c7dbaf-3568-4e03-ba64-c0aafac02efd" containerName="proxy-httpd" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.297078 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="68c7dbaf-3568-4e03-ba64-c0aafac02efd" containerName="proxy-httpd" Dec 02 20:42:14 crc kubenswrapper[4796]: E1202 20:42:14.297088 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0add6e59-6e90-4a8d-ad2f-5a8e6691f9e1" containerName="mariadb-account-delete" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.297095 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="0add6e59-6e90-4a8d-ad2f-5a8e6691f9e1" containerName="mariadb-account-delete" Dec 02 20:42:14 crc kubenswrapper[4796]: E1202 20:42:14.297101 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abb95a24-b20f-4f5d-b988-f994a63683d7" containerName="watcher-kuttl-api-log" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.297107 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="abb95a24-b20f-4f5d-b988-f994a63683d7" containerName="watcher-kuttl-api-log" Dec 02 20:42:14 crc kubenswrapper[4796]: E1202 20:42:14.297119 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68c7dbaf-3568-4e03-ba64-c0aafac02efd" containerName="ceilometer-central-agent" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.297125 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="68c7dbaf-3568-4e03-ba64-c0aafac02efd" containerName="ceilometer-central-agent" Dec 02 20:42:14 crc kubenswrapper[4796]: E1202 20:42:14.297140 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe178b48-56b0-493c-a133-ca8dacce432b" containerName="watcher-api" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.297148 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe178b48-56b0-493c-a133-ca8dacce432b" containerName="watcher-api" Dec 02 20:42:14 crc 
kubenswrapper[4796]: I1202 20:42:14.297451 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe178b48-56b0-493c-a133-ca8dacce432b" containerName="watcher-api" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.297469 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="abb95a24-b20f-4f5d-b988-f994a63683d7" containerName="watcher-api" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.297478 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe178b48-56b0-493c-a133-ca8dacce432b" containerName="watcher-kuttl-api-log" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.297527 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="68c7dbaf-3568-4e03-ba64-c0aafac02efd" containerName="proxy-httpd" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.297538 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="68c7dbaf-3568-4e03-ba64-c0aafac02efd" containerName="sg-core" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.297548 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="0add6e59-6e90-4a8d-ad2f-5a8e6691f9e1" containerName="mariadb-account-delete" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.297557 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="067c599c-9b7a-4db5-a6a9-62f520e4b67f" containerName="watcher-decision-engine" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.297569 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f5576a7-0257-4785-b47e-b6990f016c51" containerName="watcher-applier" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.297576 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="abb95a24-b20f-4f5d-b988-f994a63683d7" containerName="watcher-kuttl-api-log" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.297603 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="68c7dbaf-3568-4e03-ba64-c0aafac02efd" containerName="ceilometer-notification-agent" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.297613 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="68c7dbaf-3568-4e03-ba64-c0aafac02efd" containerName="ceilometer-central-agent" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.299101 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.299192 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.302160 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.302969 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.303048 4796 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.353805 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a49738d-5976-4610-af11-097c8f69f94d-config-data\") pod \"ceilometer-0\" (UID: \"6a49738d-5976-4610-af11-097c8f69f94d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.353858 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6a49738d-5976-4610-af11-097c8f69f94d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6a49738d-5976-4610-af11-097c8f69f94d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.353882 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a49738d-5976-4610-af11-097c8f69f94d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6a49738d-5976-4610-af11-097c8f69f94d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.353916 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a49738d-5976-4610-af11-097c8f69f94d-run-httpd\") pod \"ceilometer-0\" (UID: \"6a49738d-5976-4610-af11-097c8f69f94d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.353937 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a49738d-5976-4610-af11-097c8f69f94d-scripts\") pod \"ceilometer-0\" (UID: \"6a49738d-5976-4610-af11-097c8f69f94d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.353999 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lszq\" (UniqueName: \"kubernetes.io/projected/6a49738d-5976-4610-af11-097c8f69f94d-kube-api-access-9lszq\") pod \"ceilometer-0\" (UID: \"6a49738d-5976-4610-af11-097c8f69f94d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.354088 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a49738d-5976-4610-af11-097c8f69f94d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6a49738d-5976-4610-af11-097c8f69f94d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.359760 4796 scope.go:117] "RemoveContainer" containerID="b28e60c8625c0d2f87c94bf8aa16c6074cd0eba1d050c94166f8d9dc37cc1df7" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 
20:42:14.359819 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a49738d-5976-4610-af11-097c8f69f94d-log-httpd\") pod \"ceilometer-0\" (UID: \"6a49738d-5976-4610-af11-097c8f69f94d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.378968 4796 scope.go:117] "RemoveContainer" containerID="88846f6868acbd70f11460e4a447156a2052e0798dcc123b91bb60080b2ad659" Dec 02 20:42:14 crc kubenswrapper[4796]: E1202 20:42:14.379381 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88846f6868acbd70f11460e4a447156a2052e0798dcc123b91bb60080b2ad659\": container with ID starting with 88846f6868acbd70f11460e4a447156a2052e0798dcc123b91bb60080b2ad659 not found: ID does not exist" containerID="88846f6868acbd70f11460e4a447156a2052e0798dcc123b91bb60080b2ad659" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.379431 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88846f6868acbd70f11460e4a447156a2052e0798dcc123b91bb60080b2ad659"} err="failed to get container status \"88846f6868acbd70f11460e4a447156a2052e0798dcc123b91bb60080b2ad659\": rpc error: code = NotFound desc = could not find container \"88846f6868acbd70f11460e4a447156a2052e0798dcc123b91bb60080b2ad659\": container with ID starting with 88846f6868acbd70f11460e4a447156a2052e0798dcc123b91bb60080b2ad659 not found: ID does not exist" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.379461 4796 scope.go:117] "RemoveContainer" containerID="7150d1cde44053b29f905e304c51d05b9f4e2bbe4923075c21b5a41c919b0c3f" Dec 02 20:42:14 crc kubenswrapper[4796]: E1202 20:42:14.379921 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7150d1cde44053b29f905e304c51d05b9f4e2bbe4923075c21b5a41c919b0c3f\": container with ID starting with 7150d1cde44053b29f905e304c51d05b9f4e2bbe4923075c21b5a41c919b0c3f not found: ID does not exist" containerID="7150d1cde44053b29f905e304c51d05b9f4e2bbe4923075c21b5a41c919b0c3f" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.379952 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7150d1cde44053b29f905e304c51d05b9f4e2bbe4923075c21b5a41c919b0c3f"} err="failed to get container status \"7150d1cde44053b29f905e304c51d05b9f4e2bbe4923075c21b5a41c919b0c3f\": rpc error: code = NotFound desc = could not find container \"7150d1cde44053b29f905e304c51d05b9f4e2bbe4923075c21b5a41c919b0c3f\": container with ID starting with 7150d1cde44053b29f905e304c51d05b9f4e2bbe4923075c21b5a41c919b0c3f not found: ID does not exist" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.379969 4796 scope.go:117] "RemoveContainer" containerID="9afb9bd9ee0d90898b6453f7145d9d1c84618135ba5e17267a725b63baff88bc" Dec 02 20:42:14 crc kubenswrapper[4796]: E1202 20:42:14.380211 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9afb9bd9ee0d90898b6453f7145d9d1c84618135ba5e17267a725b63baff88bc\": container with ID starting with 9afb9bd9ee0d90898b6453f7145d9d1c84618135ba5e17267a725b63baff88bc not found: ID does not exist" containerID="9afb9bd9ee0d90898b6453f7145d9d1c84618135ba5e17267a725b63baff88bc" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.380235 4796 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9afb9bd9ee0d90898b6453f7145d9d1c84618135ba5e17267a725b63baff88bc"} err="failed to get container status \"9afb9bd9ee0d90898b6453f7145d9d1c84618135ba5e17267a725b63baff88bc\": rpc error: code = NotFound desc = could not find container \"9afb9bd9ee0d90898b6453f7145d9d1c84618135ba5e17267a725b63baff88bc\": container with ID starting with 9afb9bd9ee0d90898b6453f7145d9d1c84618135ba5e17267a725b63baff88bc not found: ID does not exist" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.380272 4796 scope.go:117] "RemoveContainer" containerID="b28e60c8625c0d2f87c94bf8aa16c6074cd0eba1d050c94166f8d9dc37cc1df7" Dec 02 20:42:14 crc kubenswrapper[4796]: E1202 20:42:14.380556 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b28e60c8625c0d2f87c94bf8aa16c6074cd0eba1d050c94166f8d9dc37cc1df7\": container with ID starting with b28e60c8625c0d2f87c94bf8aa16c6074cd0eba1d050c94166f8d9dc37cc1df7 not found: ID does not exist" containerID="b28e60c8625c0d2f87c94bf8aa16c6074cd0eba1d050c94166f8d9dc37cc1df7" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.380586 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b28e60c8625c0d2f87c94bf8aa16c6074cd0eba1d050c94166f8d9dc37cc1df7"} err="failed to get container status \"b28e60c8625c0d2f87c94bf8aa16c6074cd0eba1d050c94166f8d9dc37cc1df7\": rpc error: code = NotFound desc = could not find container \"b28e60c8625c0d2f87c94bf8aa16c6074cd0eba1d050c94166f8d9dc37cc1df7\": container with ID starting with b28e60c8625c0d2f87c94bf8aa16c6074cd0eba1d050c94166f8d9dc37cc1df7 not found: ID does not exist" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.461131 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a49738d-5976-4610-af11-097c8f69f94d-log-httpd\") pod \"ceilometer-0\" (UID: \"6a49738d-5976-4610-af11-097c8f69f94d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.461893 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a49738d-5976-4610-af11-097c8f69f94d-config-data\") pod \"ceilometer-0\" (UID: \"6a49738d-5976-4610-af11-097c8f69f94d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.462069 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6a49738d-5976-4610-af11-097c8f69f94d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6a49738d-5976-4610-af11-097c8f69f94d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.462221 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a49738d-5976-4610-af11-097c8f69f94d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6a49738d-5976-4610-af11-097c8f69f94d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.462416 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a49738d-5976-4610-af11-097c8f69f94d-run-httpd\") pod \"ceilometer-0\" (UID: \"6a49738d-5976-4610-af11-097c8f69f94d\") " 
pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.462553 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a49738d-5976-4610-af11-097c8f69f94d-scripts\") pod \"ceilometer-0\" (UID: \"6a49738d-5976-4610-af11-097c8f69f94d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.462754 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lszq\" (UniqueName: \"kubernetes.io/projected/6a49738d-5976-4610-af11-097c8f69f94d-kube-api-access-9lszq\") pod \"ceilometer-0\" (UID: \"6a49738d-5976-4610-af11-097c8f69f94d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.462928 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a49738d-5976-4610-af11-097c8f69f94d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6a49738d-5976-4610-af11-097c8f69f94d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.461901 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a49738d-5976-4610-af11-097c8f69f94d-log-httpd\") pod \"ceilometer-0\" (UID: \"6a49738d-5976-4610-af11-097c8f69f94d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.464015 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a49738d-5976-4610-af11-097c8f69f94d-run-httpd\") pod \"ceilometer-0\" (UID: \"6a49738d-5976-4610-af11-097c8f69f94d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.466619 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6a49738d-5976-4610-af11-097c8f69f94d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6a49738d-5976-4610-af11-097c8f69f94d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.467792 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a49738d-5976-4610-af11-097c8f69f94d-scripts\") pod \"ceilometer-0\" (UID: \"6a49738d-5976-4610-af11-097c8f69f94d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.468664 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a49738d-5976-4610-af11-097c8f69f94d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6a49738d-5976-4610-af11-097c8f69f94d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.480043 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a49738d-5976-4610-af11-097c8f69f94d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6a49738d-5976-4610-af11-097c8f69f94d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.484634 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a49738d-5976-4610-af11-097c8f69f94d-config-data\") pod 
\"ceilometer-0\" (UID: \"6a49738d-5976-4610-af11-097c8f69f94d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.493833 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lszq\" (UniqueName: \"kubernetes.io/projected/6a49738d-5976-4610-af11-097c8f69f94d-kube-api-access-9lszq\") pod \"ceilometer-0\" (UID: \"6a49738d-5976-4610-af11-097c8f69f94d\") " pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:42:14 crc kubenswrapper[4796]: I1202 20:42:14.655214 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:42:15 crc kubenswrapper[4796]: I1202 20:42:15.015344 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Dec 02 20:42:15 crc kubenswrapper[4796]: I1202 20:42:15.222011 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6a49738d-5976-4610-af11-097c8f69f94d","Type":"ContainerStarted","Data":"2a4b92699dd4d86a078036aac0c6c3d5b83b796c4615efc48031c20fb928698a"} Dec 02 20:42:15 crc kubenswrapper[4796]: I1202 20:42:15.282843 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68c7dbaf-3568-4e03-ba64-c0aafac02efd" path="/var/lib/kubelet/pods/68c7dbaf-3568-4e03-ba64-c0aafac02efd/volumes" Dec 02 20:42:16 crc kubenswrapper[4796]: I1202 20:42:16.234083 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6a49738d-5976-4610-af11-097c8f69f94d","Type":"ContainerStarted","Data":"08f90509fd2feda9d1f3c0a6175325f45f50cdbddb169ab65a154fe05b2b6b0f"} Dec 02 20:42:17 crc kubenswrapper[4796]: I1202 20:42:17.250774 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6a49738d-5976-4610-af11-097c8f69f94d","Type":"ContainerStarted","Data":"bb38c1a4010530eb6af5a764348725a7c1bcd836bf664adfb0343aecc3a887ac"} Dec 02 20:42:17 crc kubenswrapper[4796]: I1202 20:42:17.271076 4796 scope.go:117] "RemoveContainer" containerID="a3b1e4c1e71e5db50d52130283a4f488c4d2f8d9bfa463ec7357e0975e3daac1" Dec 02 20:42:17 crc kubenswrapper[4796]: E1202 20:42:17.271779 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wzhpq_openshift-machine-config-operator(5558dc7c-93f9-4212-bf22-fdec743e47ee)\"" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" Dec 02 20:42:18 crc kubenswrapper[4796]: I1202 20:42:18.264137 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6a49738d-5976-4610-af11-097c8f69f94d","Type":"ContainerStarted","Data":"d186606765bb78034996a0d6308092c34ad9dc3d38fd7cd9bf3f442219b39b51"} Dec 02 20:42:19 crc kubenswrapper[4796]: I1202 20:42:19.279651 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6a49738d-5976-4610-af11-097c8f69f94d","Type":"ContainerStarted","Data":"dbab160cb39debf248f92a77ecfc5e8a486599647c0dbdb0275847aec070d2e3"} Dec 02 20:42:19 crc kubenswrapper[4796]: I1202 20:42:19.281418 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:42:19 crc kubenswrapper[4796]: I1202 20:42:19.309192 4796 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.889805683 podStartE2EDuration="5.309166149s" podCreationTimestamp="2025-12-02 20:42:14 +0000 UTC" firstStartedPulling="2025-12-02 20:42:15.022026261 +0000 UTC m=+1818.025401795" lastFinishedPulling="2025-12-02 20:42:18.441386727 +0000 UTC m=+1821.444762261" observedRunningTime="2025-12-02 20:42:19.305036649 +0000 UTC m=+1822.308412203" watchObservedRunningTime="2025-12-02 20:42:19.309166149 +0000 UTC m=+1822.312541693" Dec 02 20:42:28 crc kubenswrapper[4796]: I1202 20:42:28.258695 4796 scope.go:117] "RemoveContainer" containerID="a3d755ec7e2403ce841049ef585b88f6cbc4908275411093c0e9befccee621e7" Dec 02 20:42:28 crc kubenswrapper[4796]: I1202 20:42:28.319719 4796 scope.go:117] "RemoveContainer" containerID="914dcb19a1c7df55747d4d9a8640809ad767d23ef85d05944ddc8a4501658aa3" Dec 02 20:42:28 crc kubenswrapper[4796]: I1202 20:42:28.342423 4796 scope.go:117] "RemoveContainer" containerID="d087025674c71fedb8e07e48cc3f4e950071f11f9dea1e33bab7b3739bb1585b" Dec 02 20:42:28 crc kubenswrapper[4796]: I1202 20:42:28.391193 4796 scope.go:117] "RemoveContainer" containerID="7d5e160776ab02ecd68df753deb5b629277c3a4965ad23fd4851b84b7255ea64" Dec 02 20:42:32 crc kubenswrapper[4796]: I1202 20:42:32.264813 4796 scope.go:117] "RemoveContainer" containerID="a3b1e4c1e71e5db50d52130283a4f488c4d2f8d9bfa463ec7357e0975e3daac1" Dec 02 20:42:32 crc kubenswrapper[4796]: E1202 20:42:32.265919 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wzhpq_openshift-machine-config-operator(5558dc7c-93f9-4212-bf22-fdec743e47ee)\"" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" Dec 02 20:42:43 crc kubenswrapper[4796]: I1202 20:42:43.264678 4796 scope.go:117] "RemoveContainer" containerID="a3b1e4c1e71e5db50d52130283a4f488c4d2f8d9bfa463ec7357e0975e3daac1" Dec 02 20:42:43 crc kubenswrapper[4796]: E1202 20:42:43.265318 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wzhpq_openshift-machine-config-operator(5558dc7c-93f9-4212-bf22-fdec743e47ee)\"" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" Dec 02 20:42:44 crc kubenswrapper[4796]: I1202 20:42:44.678658 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Dec 02 20:42:46 crc kubenswrapper[4796]: I1202 20:42:46.259767 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xz7jb/must-gather-pnp66"] Dec 02 20:42:46 crc kubenswrapper[4796]: I1202 20:42:46.261133 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xz7jb/must-gather-pnp66" Dec 02 20:42:46 crc kubenswrapper[4796]: I1202 20:42:46.264473 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-xz7jb"/"openshift-service-ca.crt" Dec 02 20:42:46 crc kubenswrapper[4796]: I1202 20:42:46.277856 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-xz7jb"/"kube-root-ca.crt" Dec 02 20:42:46 crc kubenswrapper[4796]: I1202 20:42:46.288995 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-xz7jb/must-gather-pnp66"] Dec 02 20:42:46 crc kubenswrapper[4796]: I1202 20:42:46.363622 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z48fq\" (UniqueName: \"kubernetes.io/projected/c58d5df7-1f09-4316-863f-c1e7d907fd55-kube-api-access-z48fq\") pod \"must-gather-pnp66\" (UID: \"c58d5df7-1f09-4316-863f-c1e7d907fd55\") " pod="openshift-must-gather-xz7jb/must-gather-pnp66" Dec 02 20:42:46 crc kubenswrapper[4796]: I1202 20:42:46.363718 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c58d5df7-1f09-4316-863f-c1e7d907fd55-must-gather-output\") pod \"must-gather-pnp66\" (UID: \"c58d5df7-1f09-4316-863f-c1e7d907fd55\") " pod="openshift-must-gather-xz7jb/must-gather-pnp66" Dec 02 20:42:46 crc kubenswrapper[4796]: I1202 20:42:46.464819 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z48fq\" (UniqueName: \"kubernetes.io/projected/c58d5df7-1f09-4316-863f-c1e7d907fd55-kube-api-access-z48fq\") pod \"must-gather-pnp66\" (UID: \"c58d5df7-1f09-4316-863f-c1e7d907fd55\") " pod="openshift-must-gather-xz7jb/must-gather-pnp66" Dec 02 20:42:46 crc kubenswrapper[4796]: I1202 20:42:46.464889 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c58d5df7-1f09-4316-863f-c1e7d907fd55-must-gather-output\") pod \"must-gather-pnp66\" (UID: \"c58d5df7-1f09-4316-863f-c1e7d907fd55\") " pod="openshift-must-gather-xz7jb/must-gather-pnp66" Dec 02 20:42:46 crc kubenswrapper[4796]: I1202 20:42:46.465321 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c58d5df7-1f09-4316-863f-c1e7d907fd55-must-gather-output\") pod \"must-gather-pnp66\" (UID: \"c58d5df7-1f09-4316-863f-c1e7d907fd55\") " pod="openshift-must-gather-xz7jb/must-gather-pnp66" Dec 02 20:42:46 crc kubenswrapper[4796]: I1202 20:42:46.487851 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z48fq\" (UniqueName: \"kubernetes.io/projected/c58d5df7-1f09-4316-863f-c1e7d907fd55-kube-api-access-z48fq\") pod \"must-gather-pnp66\" (UID: \"c58d5df7-1f09-4316-863f-c1e7d907fd55\") " pod="openshift-must-gather-xz7jb/must-gather-pnp66" Dec 02 20:42:46 crc kubenswrapper[4796]: I1202 20:42:46.578887 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xz7jb/must-gather-pnp66" Dec 02 20:42:47 crc kubenswrapper[4796]: I1202 20:42:47.043691 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-xz7jb/must-gather-pnp66"] Dec 02 20:42:47 crc kubenswrapper[4796]: I1202 20:42:47.576107 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xz7jb/must-gather-pnp66" event={"ID":"c58d5df7-1f09-4316-863f-c1e7d907fd55","Type":"ContainerStarted","Data":"e38eb4a4affff85c9cdee0b8c7ffd89f9758572fa026d50b0142f9dcd6982681"} Dec 02 20:42:52 crc kubenswrapper[4796]: I1202 20:42:52.651479 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xz7jb/must-gather-pnp66" event={"ID":"c58d5df7-1f09-4316-863f-c1e7d907fd55","Type":"ContainerStarted","Data":"63139767a2a77ae4d332a33939470096dab4be936da23652153cb644282262ff"} Dec 02 20:42:52 crc kubenswrapper[4796]: I1202 20:42:52.652390 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xz7jb/must-gather-pnp66" event={"ID":"c58d5df7-1f09-4316-863f-c1e7d907fd55","Type":"ContainerStarted","Data":"8e9a889f95d7ae9fc6a1f95cf1e9ad54157a09c5fbbc78e782f3e8f68daa6fba"} Dec 02 20:42:52 crc kubenswrapper[4796]: I1202 20:42:52.676938 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-xz7jb/must-gather-pnp66" podStartSLOduration=2.006821313 podStartE2EDuration="6.676915587s" podCreationTimestamp="2025-12-02 20:42:46 +0000 UTC" firstStartedPulling="2025-12-02 20:42:47.055565604 +0000 UTC m=+1850.058941138" lastFinishedPulling="2025-12-02 20:42:51.725659888 +0000 UTC m=+1854.729035412" observedRunningTime="2025-12-02 20:42:52.672175564 +0000 UTC m=+1855.675551098" watchObservedRunningTime="2025-12-02 20:42:52.676915587 +0000 UTC m=+1855.680291121" Dec 02 20:42:58 crc kubenswrapper[4796]: I1202 20:42:58.264763 4796 scope.go:117] "RemoveContainer" containerID="a3b1e4c1e71e5db50d52130283a4f488c4d2f8d9bfa463ec7357e0975e3daac1" Dec 02 20:42:58 crc kubenswrapper[4796]: I1202 20:42:58.696436 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" event={"ID":"5558dc7c-93f9-4212-bf22-fdec743e47ee","Type":"ContainerStarted","Data":"c4b16f65e6cf8140dac6fd968d52d0328394c28b2ff197e938fef2e061c5b2e3"} Dec 02 20:43:54 crc kubenswrapper[4796]: I1202 20:43:54.235538 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mvfxk"] Dec 02 20:43:54 crc kubenswrapper[4796]: I1202 20:43:54.238658 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mvfxk" Dec 02 20:43:54 crc kubenswrapper[4796]: I1202 20:43:54.249875 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mvfxk"] Dec 02 20:43:54 crc kubenswrapper[4796]: I1202 20:43:54.255472 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ed9267c-482b-4632-a6aa-c6b4998b3cd3-utilities\") pod \"certified-operators-mvfxk\" (UID: \"9ed9267c-482b-4632-a6aa-c6b4998b3cd3\") " pod="openshift-marketplace/certified-operators-mvfxk" Dec 02 20:43:54 crc kubenswrapper[4796]: I1202 20:43:54.255552 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6tlk\" (UniqueName: \"kubernetes.io/projected/9ed9267c-482b-4632-a6aa-c6b4998b3cd3-kube-api-access-w6tlk\") pod \"certified-operators-mvfxk\" (UID: \"9ed9267c-482b-4632-a6aa-c6b4998b3cd3\") " pod="openshift-marketplace/certified-operators-mvfxk" Dec 02 20:43:54 crc kubenswrapper[4796]: I1202 20:43:54.255686 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ed9267c-482b-4632-a6aa-c6b4998b3cd3-catalog-content\") pod \"certified-operators-mvfxk\" (UID: \"9ed9267c-482b-4632-a6aa-c6b4998b3cd3\") " pod="openshift-marketplace/certified-operators-mvfxk" Dec 02 20:43:54 crc kubenswrapper[4796]: I1202 20:43:54.357709 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ed9267c-482b-4632-a6aa-c6b4998b3cd3-utilities\") pod \"certified-operators-mvfxk\" (UID: \"9ed9267c-482b-4632-a6aa-c6b4998b3cd3\") " pod="openshift-marketplace/certified-operators-mvfxk" Dec 02 20:43:54 crc kubenswrapper[4796]: I1202 20:43:54.357799 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6tlk\" (UniqueName: \"kubernetes.io/projected/9ed9267c-482b-4632-a6aa-c6b4998b3cd3-kube-api-access-w6tlk\") pod \"certified-operators-mvfxk\" (UID: \"9ed9267c-482b-4632-a6aa-c6b4998b3cd3\") " pod="openshift-marketplace/certified-operators-mvfxk" Dec 02 20:43:54 crc kubenswrapper[4796]: I1202 20:43:54.357941 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ed9267c-482b-4632-a6aa-c6b4998b3cd3-catalog-content\") pod \"certified-operators-mvfxk\" (UID: \"9ed9267c-482b-4632-a6aa-c6b4998b3cd3\") " pod="openshift-marketplace/certified-operators-mvfxk" Dec 02 20:43:54 crc kubenswrapper[4796]: I1202 20:43:54.358562 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ed9267c-482b-4632-a6aa-c6b4998b3cd3-catalog-content\") pod \"certified-operators-mvfxk\" (UID: \"9ed9267c-482b-4632-a6aa-c6b4998b3cd3\") " pod="openshift-marketplace/certified-operators-mvfxk" Dec 02 20:43:54 crc kubenswrapper[4796]: I1202 20:43:54.358568 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ed9267c-482b-4632-a6aa-c6b4998b3cd3-utilities\") pod \"certified-operators-mvfxk\" (UID: \"9ed9267c-482b-4632-a6aa-c6b4998b3cd3\") " pod="openshift-marketplace/certified-operators-mvfxk" Dec 02 20:43:54 crc kubenswrapper[4796]: I1202 20:43:54.393969 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-w6tlk\" (UniqueName: \"kubernetes.io/projected/9ed9267c-482b-4632-a6aa-c6b4998b3cd3-kube-api-access-w6tlk\") pod \"certified-operators-mvfxk\" (UID: \"9ed9267c-482b-4632-a6aa-c6b4998b3cd3\") " pod="openshift-marketplace/certified-operators-mvfxk" Dec 02 20:43:54 crc kubenswrapper[4796]: I1202 20:43:54.568561 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mvfxk" Dec 02 20:43:55 crc kubenswrapper[4796]: I1202 20:43:55.095060 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mvfxk"] Dec 02 20:43:55 crc kubenswrapper[4796]: I1202 20:43:55.235237 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvfxk" event={"ID":"9ed9267c-482b-4632-a6aa-c6b4998b3cd3","Type":"ContainerStarted","Data":"fa8fb0c9b08b36e24a6122eb99d9b2c099a0f4545811e9aad64510cbea248ea8"} Dec 02 20:43:56 crc kubenswrapper[4796]: I1202 20:43:56.246520 4796 generic.go:334] "Generic (PLEG): container finished" podID="9ed9267c-482b-4632-a6aa-c6b4998b3cd3" containerID="f288d40b1604fa1673d2900cdff4875c807bb8e240eafd8f3734a581aa4893cc" exitCode=0 Dec 02 20:43:56 crc kubenswrapper[4796]: I1202 20:43:56.246588 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvfxk" event={"ID":"9ed9267c-482b-4632-a6aa-c6b4998b3cd3","Type":"ContainerDied","Data":"f288d40b1604fa1673d2900cdff4875c807bb8e240eafd8f3734a581aa4893cc"} Dec 02 20:43:56 crc kubenswrapper[4796]: I1202 20:43:56.249041 4796 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 20:43:58 crc kubenswrapper[4796]: I1202 20:43:58.277321 4796 generic.go:334] "Generic (PLEG): container finished" podID="9ed9267c-482b-4632-a6aa-c6b4998b3cd3" containerID="e42d2631257a7c6753a29297aa7f50f71bcbb8f935fc39fbb13f6f785fc2ccad" exitCode=0 Dec 02 20:43:58 crc kubenswrapper[4796]: I1202 20:43:58.277731 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvfxk" event={"ID":"9ed9267c-482b-4632-a6aa-c6b4998b3cd3","Type":"ContainerDied","Data":"e42d2631257a7c6753a29297aa7f50f71bcbb8f935fc39fbb13f6f785fc2ccad"} Dec 02 20:43:59 crc kubenswrapper[4796]: I1202 20:43:59.289939 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvfxk" event={"ID":"9ed9267c-482b-4632-a6aa-c6b4998b3cd3","Type":"ContainerStarted","Data":"99ae3414641a3befda11fa31daf094f2e871b119c5c01350450e3f0a0c1f6a33"} Dec 02 20:43:59 crc kubenswrapper[4796]: I1202 20:43:59.316790 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mvfxk" podStartSLOduration=2.851751692 podStartE2EDuration="5.316772435s" podCreationTimestamp="2025-12-02 20:43:54 +0000 UTC" firstStartedPulling="2025-12-02 20:43:56.248786587 +0000 UTC m=+1919.252162121" lastFinishedPulling="2025-12-02 20:43:58.71380733 +0000 UTC m=+1921.717182864" observedRunningTime="2025-12-02 20:43:59.315017683 +0000 UTC m=+1922.318393217" watchObservedRunningTime="2025-12-02 20:43:59.316772435 +0000 UTC m=+1922.320147969" Dec 02 20:44:04 crc kubenswrapper[4796]: I1202 20:44:04.568912 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mvfxk" Dec 02 20:44:04 crc kubenswrapper[4796]: I1202 20:44:04.569474 4796 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mvfxk" Dec 02 20:44:04 crc kubenswrapper[4796]: I1202 20:44:04.617658 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mvfxk" Dec 02 20:44:05 crc kubenswrapper[4796]: I1202 20:44:05.385799 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mvfxk" Dec 02 20:44:05 crc kubenswrapper[4796]: I1202 20:44:05.928732 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_10ddc2db95747d078ebd9f771b75baa80e851195a4bcbce3b6c4e62b2ch9jnx_819b363e-8c48-4778-9f8f-b37c1e2e7bd9/util/0.log" Dec 02 20:44:06 crc kubenswrapper[4796]: I1202 20:44:06.141364 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_10ddc2db95747d078ebd9f771b75baa80e851195a4bcbce3b6c4e62b2ch9jnx_819b363e-8c48-4778-9f8f-b37c1e2e7bd9/util/0.log" Dec 02 20:44:06 crc kubenswrapper[4796]: I1202 20:44:06.192968 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_10ddc2db95747d078ebd9f771b75baa80e851195a4bcbce3b6c4e62b2ch9jnx_819b363e-8c48-4778-9f8f-b37c1e2e7bd9/pull/0.log" Dec 02 20:44:06 crc kubenswrapper[4796]: I1202 20:44:06.211511 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_10ddc2db95747d078ebd9f771b75baa80e851195a4bcbce3b6c4e62b2ch9jnx_819b363e-8c48-4778-9f8f-b37c1e2e7bd9/pull/0.log" Dec 02 20:44:06 crc kubenswrapper[4796]: I1202 20:44:06.424539 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_10ddc2db95747d078ebd9f771b75baa80e851195a4bcbce3b6c4e62b2ch9jnx_819b363e-8c48-4778-9f8f-b37c1e2e7bd9/util/0.log" Dec 02 20:44:06 crc kubenswrapper[4796]: I1202 20:44:06.427505 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_10ddc2db95747d078ebd9f771b75baa80e851195a4bcbce3b6c4e62b2ch9jnx_819b363e-8c48-4778-9f8f-b37c1e2e7bd9/extract/0.log" Dec 02 20:44:06 crc kubenswrapper[4796]: I1202 20:44:06.468539 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_10ddc2db95747d078ebd9f771b75baa80e851195a4bcbce3b6c4e62b2ch9jnx_819b363e-8c48-4778-9f8f-b37c1e2e7bd9/pull/0.log" Dec 02 20:44:06 crc kubenswrapper[4796]: I1202 20:44:06.615052 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-cshfw_8161682e-0c53-41ab-bef7-99766302c3eb/kube-rbac-proxy/0.log" Dec 02 20:44:06 crc kubenswrapper[4796]: I1202 20:44:06.621504 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-cshfw_8161682e-0c53-41ab-bef7-99766302c3eb/manager/0.log" Dec 02 20:44:06 crc kubenswrapper[4796]: I1202 20:44:06.750689 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-jwbj8_c4bff453-90ae-481c-8027-5eca98e48917/kube-rbac-proxy/0.log" Dec 02 20:44:07 crc kubenswrapper[4796]: I1202 20:44:07.033908 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-jwbj8_c4bff453-90ae-481c-8027-5eca98e48917/manager/0.log" Dec 02 20:44:07 crc kubenswrapper[4796]: I1202 20:44:07.121964 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-grxnn_ac795881-0ae6-43cf-9a1d-119408238bf6/kube-rbac-proxy/0.log" Dec 02 20:44:07 crc kubenswrapper[4796]: I1202 20:44:07.194334 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-grxnn_ac795881-0ae6-43cf-9a1d-119408238bf6/manager/0.log" Dec 02 20:44:07 crc kubenswrapper[4796]: I1202 20:44:07.301773 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fc96d06c3e39ea04b86eb6900b16243d90f8687a90757dd22bb5b9493cswlln_1f1197d5-f1e2-422c-93ba-2150f58ea971/util/0.log" Dec 02 20:44:07 crc kubenswrapper[4796]: I1202 20:44:07.489044 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fc96d06c3e39ea04b86eb6900b16243d90f8687a90757dd22bb5b9493cswlln_1f1197d5-f1e2-422c-93ba-2150f58ea971/pull/0.log" Dec 02 20:44:07 crc kubenswrapper[4796]: I1202 20:44:07.517044 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fc96d06c3e39ea04b86eb6900b16243d90f8687a90757dd22bb5b9493cswlln_1f1197d5-f1e2-422c-93ba-2150f58ea971/util/0.log" Dec 02 20:44:07 crc kubenswrapper[4796]: I1202 20:44:07.536690 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fc96d06c3e39ea04b86eb6900b16243d90f8687a90757dd22bb5b9493cswlln_1f1197d5-f1e2-422c-93ba-2150f58ea971/pull/0.log" Dec 02 20:44:07 crc kubenswrapper[4796]: I1202 20:44:07.693721 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fc96d06c3e39ea04b86eb6900b16243d90f8687a90757dd22bb5b9493cswlln_1f1197d5-f1e2-422c-93ba-2150f58ea971/extract/0.log" Dec 02 20:44:07 crc kubenswrapper[4796]: I1202 20:44:07.699196 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fc96d06c3e39ea04b86eb6900b16243d90f8687a90757dd22bb5b9493cswlln_1f1197d5-f1e2-422c-93ba-2150f58ea971/pull/0.log" Dec 02 20:44:07 crc kubenswrapper[4796]: I1202 20:44:07.739526 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fc96d06c3e39ea04b86eb6900b16243d90f8687a90757dd22bb5b9493cswlln_1f1197d5-f1e2-422c-93ba-2150f58ea971/util/0.log" Dec 02 20:44:07 crc kubenswrapper[4796]: I1202 20:44:07.892540 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-tsrx8_9f64c0f7-3638-4f4c-bf3e-8ab0da4f2f77/kube-rbac-proxy/0.log" Dec 02 20:44:07 crc kubenswrapper[4796]: I1202 20:44:07.900666 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-tsrx8_9f64c0f7-3638-4f4c-bf3e-8ab0da4f2f77/manager/0.log" Dec 02 20:44:07 crc kubenswrapper[4796]: I1202 20:44:07.951353 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-shlwt_8cff73ad-f4f3-47a7-8ba1-985614f757a3/kube-rbac-proxy/0.log" Dec 02 20:44:08 crc kubenswrapper[4796]: I1202 20:44:08.090872 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-shlwt_8cff73ad-f4f3-47a7-8ba1-985614f757a3/manager/0.log" Dec 02 20:44:08 crc kubenswrapper[4796]: I1202 20:44:08.180665 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-8fc8h_b751e4e4-8d34-4fcc-baca-1e0eea85f1b9/kube-rbac-proxy/0.log" Dec 02 20:44:08 crc kubenswrapper[4796]: I1202 
20:44:08.184112 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-8fc8h_b751e4e4-8d34-4fcc-baca-1e0eea85f1b9/manager/0.log" Dec 02 20:44:08 crc kubenswrapper[4796]: I1202 20:44:08.214833 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mvfxk"] Dec 02 20:44:08 crc kubenswrapper[4796]: I1202 20:44:08.215223 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mvfxk" podUID="9ed9267c-482b-4632-a6aa-c6b4998b3cd3" containerName="registry-server" containerID="cri-o://99ae3414641a3befda11fa31daf094f2e871b119c5c01350450e3f0a0c1f6a33" gracePeriod=2 Dec 02 20:44:08 crc kubenswrapper[4796]: I1202 20:44:08.380365 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-n5m58_98ce5936-d2d0-480f-bbd6-79e074aa862c/kube-rbac-proxy/0.log" Dec 02 20:44:08 crc kubenswrapper[4796]: I1202 20:44:08.592137 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-bv8px_741037fd-f9c3-4461-8f0c-d94f1f869ec0/kube-rbac-proxy/0.log" Dec 02 20:44:08 crc kubenswrapper[4796]: I1202 20:44:08.626829 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-n5m58_98ce5936-d2d0-480f-bbd6-79e074aa862c/manager/0.log" Dec 02 20:44:08 crc kubenswrapper[4796]: I1202 20:44:08.646980 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-bv8px_741037fd-f9c3-4461-8f0c-d94f1f869ec0/manager/0.log" Dec 02 20:44:08 crc kubenswrapper[4796]: I1202 20:44:08.763743 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-qccgk_1180fd08-546d-431c-9583-10fef2f94b1f/kube-rbac-proxy/0.log" Dec 02 20:44:08 crc kubenswrapper[4796]: I1202 20:44:08.892729 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-qccgk_1180fd08-546d-431c-9583-10fef2f94b1f/manager/0.log" Dec 02 20:44:08 crc kubenswrapper[4796]: I1202 20:44:08.929080 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-cnfjq_84c8fa19-9b42-4ac0-bcf2-2f7b3450c8f6/kube-rbac-proxy/0.log" Dec 02 20:44:09 crc kubenswrapper[4796]: I1202 20:44:09.096417 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-cnfjq_84c8fa19-9b42-4ac0-bcf2-2f7b3450c8f6/manager/0.log" Dec 02 20:44:09 crc kubenswrapper[4796]: I1202 20:44:09.171792 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mvfxk" Dec 02 20:44:09 crc kubenswrapper[4796]: I1202 20:44:09.189566 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-4qgct_8fa1598d-65f1-4841-8609-2c07e7dc8ffd/manager/0.log" Dec 02 20:44:09 crc kubenswrapper[4796]: I1202 20:44:09.194564 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-4qgct_8fa1598d-65f1-4841-8609-2c07e7dc8ffd/kube-rbac-proxy/0.log" Dec 02 20:44:09 crc kubenswrapper[4796]: I1202 20:44:09.354502 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6tlk\" (UniqueName: \"kubernetes.io/projected/9ed9267c-482b-4632-a6aa-c6b4998b3cd3-kube-api-access-w6tlk\") pod \"9ed9267c-482b-4632-a6aa-c6b4998b3cd3\" (UID: \"9ed9267c-482b-4632-a6aa-c6b4998b3cd3\") " Dec 02 20:44:09 crc kubenswrapper[4796]: I1202 20:44:09.354727 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ed9267c-482b-4632-a6aa-c6b4998b3cd3-catalog-content\") pod \"9ed9267c-482b-4632-a6aa-c6b4998b3cd3\" (UID: \"9ed9267c-482b-4632-a6aa-c6b4998b3cd3\") " Dec 02 20:44:09 crc kubenswrapper[4796]: I1202 20:44:09.354939 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ed9267c-482b-4632-a6aa-c6b4998b3cd3-utilities\") pod \"9ed9267c-482b-4632-a6aa-c6b4998b3cd3\" (UID: \"9ed9267c-482b-4632-a6aa-c6b4998b3cd3\") " Dec 02 20:44:09 crc kubenswrapper[4796]: I1202 20:44:09.355634 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ed9267c-482b-4632-a6aa-c6b4998b3cd3-utilities" (OuterVolumeSpecName: "utilities") pod "9ed9267c-482b-4632-a6aa-c6b4998b3cd3" (UID: "9ed9267c-482b-4632-a6aa-c6b4998b3cd3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:44:09 crc kubenswrapper[4796]: I1202 20:44:09.370670 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ed9267c-482b-4632-a6aa-c6b4998b3cd3-kube-api-access-w6tlk" (OuterVolumeSpecName: "kube-api-access-w6tlk") pod "9ed9267c-482b-4632-a6aa-c6b4998b3cd3" (UID: "9ed9267c-482b-4632-a6aa-c6b4998b3cd3"). InnerVolumeSpecName "kube-api-access-w6tlk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:44:09 crc kubenswrapper[4796]: I1202 20:44:09.386051 4796 generic.go:334] "Generic (PLEG): container finished" podID="9ed9267c-482b-4632-a6aa-c6b4998b3cd3" containerID="99ae3414641a3befda11fa31daf094f2e871b119c5c01350450e3f0a0c1f6a33" exitCode=0 Dec 02 20:44:09 crc kubenswrapper[4796]: I1202 20:44:09.386095 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvfxk" event={"ID":"9ed9267c-482b-4632-a6aa-c6b4998b3cd3","Type":"ContainerDied","Data":"99ae3414641a3befda11fa31daf094f2e871b119c5c01350450e3f0a0c1f6a33"} Dec 02 20:44:09 crc kubenswrapper[4796]: I1202 20:44:09.386121 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvfxk" event={"ID":"9ed9267c-482b-4632-a6aa-c6b4998b3cd3","Type":"ContainerDied","Data":"fa8fb0c9b08b36e24a6122eb99d9b2c099a0f4545811e9aad64510cbea248ea8"} Dec 02 20:44:09 crc kubenswrapper[4796]: I1202 20:44:09.386140 4796 scope.go:117] "RemoveContainer" containerID="99ae3414641a3befda11fa31daf094f2e871b119c5c01350450e3f0a0c1f6a33" Dec 02 20:44:09 crc kubenswrapper[4796]: I1202 20:44:09.386278 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mvfxk" Dec 02 20:44:09 crc kubenswrapper[4796]: I1202 20:44:09.415721 4796 scope.go:117] "RemoveContainer" containerID="e42d2631257a7c6753a29297aa7f50f71bcbb8f935fc39fbb13f6f785fc2ccad" Dec 02 20:44:09 crc kubenswrapper[4796]: I1202 20:44:09.416993 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ed9267c-482b-4632-a6aa-c6b4998b3cd3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ed9267c-482b-4632-a6aa-c6b4998b3cd3" (UID: "9ed9267c-482b-4632-a6aa-c6b4998b3cd3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:44:09 crc kubenswrapper[4796]: I1202 20:44:09.445700 4796 scope.go:117] "RemoveContainer" containerID="f288d40b1604fa1673d2900cdff4875c807bb8e240eafd8f3734a581aa4893cc" Dec 02 20:44:09 crc kubenswrapper[4796]: I1202 20:44:09.457311 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ed9267c-482b-4632-a6aa-c6b4998b3cd3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 20:44:09 crc kubenswrapper[4796]: I1202 20:44:09.457344 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ed9267c-482b-4632-a6aa-c6b4998b3cd3-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 20:44:09 crc kubenswrapper[4796]: I1202 20:44:09.457354 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6tlk\" (UniqueName: \"kubernetes.io/projected/9ed9267c-482b-4632-a6aa-c6b4998b3cd3-kube-api-access-w6tlk\") on node \"crc\" DevicePath \"\"" Dec 02 20:44:09 crc kubenswrapper[4796]: I1202 20:44:09.466163 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-s988m_87e62b79-fb94-4209-88b2-c2b6b0966181/kube-rbac-proxy/0.log" Dec 02 20:44:09 crc kubenswrapper[4796]: I1202 20:44:09.468696 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-s988m_87e62b79-fb94-4209-88b2-c2b6b0966181/manager/0.log" Dec 02 20:44:09 crc kubenswrapper[4796]: I1202 20:44:09.475160 4796 scope.go:117] "RemoveContainer" containerID="99ae3414641a3befda11fa31daf094f2e871b119c5c01350450e3f0a0c1f6a33" Dec 02 20:44:09 crc kubenswrapper[4796]: E1202 20:44:09.475584 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99ae3414641a3befda11fa31daf094f2e871b119c5c01350450e3f0a0c1f6a33\": container with ID starting with 99ae3414641a3befda11fa31daf094f2e871b119c5c01350450e3f0a0c1f6a33 not found: ID does not exist" containerID="99ae3414641a3befda11fa31daf094f2e871b119c5c01350450e3f0a0c1f6a33" Dec 02 20:44:09 crc kubenswrapper[4796]: I1202 20:44:09.475616 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99ae3414641a3befda11fa31daf094f2e871b119c5c01350450e3f0a0c1f6a33"} err="failed to get container status \"99ae3414641a3befda11fa31daf094f2e871b119c5c01350450e3f0a0c1f6a33\": rpc error: code = NotFound desc = could not find container \"99ae3414641a3befda11fa31daf094f2e871b119c5c01350450e3f0a0c1f6a33\": container with ID starting with 99ae3414641a3befda11fa31daf094f2e871b119c5c01350450e3f0a0c1f6a33 not found: ID does not exist" Dec 02 20:44:09 crc kubenswrapper[4796]: I1202 20:44:09.475642 4796 scope.go:117] "RemoveContainer" containerID="e42d2631257a7c6753a29297aa7f50f71bcbb8f935fc39fbb13f6f785fc2ccad" Dec 02 20:44:09 crc kubenswrapper[4796]: E1202 20:44:09.476047 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e42d2631257a7c6753a29297aa7f50f71bcbb8f935fc39fbb13f6f785fc2ccad\": container with ID starting with e42d2631257a7c6753a29297aa7f50f71bcbb8f935fc39fbb13f6f785fc2ccad not found: ID does not exist" containerID="e42d2631257a7c6753a29297aa7f50f71bcbb8f935fc39fbb13f6f785fc2ccad" Dec 02 20:44:09 crc kubenswrapper[4796]: I1202 20:44:09.476071 4796 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"e42d2631257a7c6753a29297aa7f50f71bcbb8f935fc39fbb13f6f785fc2ccad"} err="failed to get container status \"e42d2631257a7c6753a29297aa7f50f71bcbb8f935fc39fbb13f6f785fc2ccad\": rpc error: code = NotFound desc = could not find container \"e42d2631257a7c6753a29297aa7f50f71bcbb8f935fc39fbb13f6f785fc2ccad\": container with ID starting with e42d2631257a7c6753a29297aa7f50f71bcbb8f935fc39fbb13f6f785fc2ccad not found: ID does not exist" Dec 02 20:44:09 crc kubenswrapper[4796]: I1202 20:44:09.476086 4796 scope.go:117] "RemoveContainer" containerID="f288d40b1604fa1673d2900cdff4875c807bb8e240eafd8f3734a581aa4893cc" Dec 02 20:44:09 crc kubenswrapper[4796]: E1202 20:44:09.476307 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f288d40b1604fa1673d2900cdff4875c807bb8e240eafd8f3734a581aa4893cc\": container with ID starting with f288d40b1604fa1673d2900cdff4875c807bb8e240eafd8f3734a581aa4893cc not found: ID does not exist" containerID="f288d40b1604fa1673d2900cdff4875c807bb8e240eafd8f3734a581aa4893cc" Dec 02 20:44:09 crc kubenswrapper[4796]: I1202 20:44:09.476327 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f288d40b1604fa1673d2900cdff4875c807bb8e240eafd8f3734a581aa4893cc"} err="failed to get container status \"f288d40b1604fa1673d2900cdff4875c807bb8e240eafd8f3734a581aa4893cc\": rpc error: code = NotFound desc = could not find container \"f288d40b1604fa1673d2900cdff4875c807bb8e240eafd8f3734a581aa4893cc\": container with ID starting with f288d40b1604fa1673d2900cdff4875c807bb8e240eafd8f3734a581aa4893cc not found: ID does not exist" Dec 02 20:44:09 crc kubenswrapper[4796]: I1202 20:44:09.584082 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-t8wwl_936371ab-213c-4f66-92ce-c9f5a79e3aa6/kube-rbac-proxy/0.log" Dec 02 20:44:09 crc kubenswrapper[4796]: I1202 20:44:09.716571 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mvfxk"] Dec 02 20:44:09 crc kubenswrapper[4796]: I1202 20:44:09.724732 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mvfxk"] Dec 02 20:44:09 crc kubenswrapper[4796]: I1202 20:44:09.746025 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-t8wwl_936371ab-213c-4f66-92ce-c9f5a79e3aa6/manager/0.log" Dec 02 20:44:09 crc kubenswrapper[4796]: I1202 20:44:09.779884 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-tmdq9_8f4b078d-0d59-4f07-97d7-0537e71a7770/kube-rbac-proxy/0.log" Dec 02 20:44:09 crc kubenswrapper[4796]: I1202 20:44:09.821481 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-tmdq9_8f4b078d-0d59-4f07-97d7-0537e71a7770/manager/0.log" Dec 02 20:44:09 crc kubenswrapper[4796]: I1202 20:44:09.958362 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4nbjm9_cefeb832-8af2-4be7-a143-a5ee5e28a091/kube-rbac-proxy/0.log" Dec 02 20:44:10 crc kubenswrapper[4796]: I1202 20:44:10.085773 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4nbjm9_cefeb832-8af2-4be7-a143-a5ee5e28a091/manager/0.log" Dec 02 20:44:10 crc kubenswrapper[4796]: I1202 20:44:10.222518 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-7mnzm_212e8079-da95-4bd2-8619-d8a0c239511d/registry-server/0.log" Dec 02 20:44:10 crc kubenswrapper[4796]: I1202 20:44:10.400263 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-5zl48_4f7d1bce-8d2f-4f79-9b65-067b14abc6ca/manager/0.log" Dec 02 20:44:10 crc kubenswrapper[4796]: I1202 20:44:10.412294 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-5zl48_4f7d1bce-8d2f-4f79-9b65-067b14abc6ca/kube-rbac-proxy/0.log" Dec 02 20:44:10 crc kubenswrapper[4796]: I1202 20:44:10.464365 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-754dcb5d59-v9gsq_3117ee94-1d46-46b4-b567-508d22bc6bac/manager/0.log" Dec 02 20:44:10 crc kubenswrapper[4796]: I1202 20:44:10.587496 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-6wk6f_07c7a00f-b0ab-4f7a-bbfe-2b137d1541b7/kube-rbac-proxy/0.log" Dec 02 20:44:10 crc kubenswrapper[4796]: I1202 20:44:10.588002 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-6wk6f_07c7a00f-b0ab-4f7a-bbfe-2b137d1541b7/manager/0.log" Dec 02 20:44:10 crc kubenswrapper[4796]: I1202 20:44:10.648472 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-27kww_0f460523-61e4-4cb6-9642-908cfa76579d/operator/0.log" Dec 02 20:44:10 crc kubenswrapper[4796]: I1202 20:44:10.774618 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-8tgcn_a5e8b895-e788-44f4-8481-520f1cbd75c0/kube-rbac-proxy/0.log" Dec 02 20:44:10 crc kubenswrapper[4796]: I1202 20:44:10.794628 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-8tgcn_a5e8b895-e788-44f4-8481-520f1cbd75c0/manager/0.log" Dec 02 20:44:10 crc kubenswrapper[4796]: I1202 20:44:10.843800 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-lgj8f_0b0e9209-3a80-4f10-9b56-4d3d28d0dee2/kube-rbac-proxy/0.log" Dec 02 20:44:11 crc kubenswrapper[4796]: I1202 20:44:11.028561 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-j9v7j_7b338d05-8a86-49bf-b996-71e686d384b2/kube-rbac-proxy/0.log" Dec 02 20:44:11 crc kubenswrapper[4796]: I1202 20:44:11.050638 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-j9v7j_7b338d05-8a86-49bf-b996-71e686d384b2/manager/0.log" Dec 02 20:44:11 crc kubenswrapper[4796]: I1202 20:44:11.073496 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-lgj8f_0b0e9209-3a80-4f10-9b56-4d3d28d0dee2/manager/0.log" Dec 02 20:44:11 crc kubenswrapper[4796]: I1202 20:44:11.282177 4796 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="9ed9267c-482b-4632-a6aa-c6b4998b3cd3" path="/var/lib/kubelet/pods/9ed9267c-482b-4632-a6aa-c6b4998b3cd3/volumes" Dec 02 20:44:11 crc kubenswrapper[4796]: I1202 20:44:11.343500 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-index-xhnj2_9388a7aa-ece0-4878-8d63-c095adf67256/registry-server/0.log" Dec 02 20:44:11 crc kubenswrapper[4796]: I1202 20:44:11.500991 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-655f76fc94-wn6n4_89390c72-5591-46ee-8b6d-71268195b622/manager/0.log" Dec 02 20:44:28 crc kubenswrapper[4796]: I1202 20:44:28.711201 4796 scope.go:117] "RemoveContainer" containerID="c556a6c1b56b5c951afcf1365f2a132dc668407f177435ffdff0979a9b5a0f0d" Dec 02 20:44:29 crc kubenswrapper[4796]: I1202 20:44:29.023932 4796 scope.go:117] "RemoveContainer" containerID="b7997d961aafdc2d72e2e27bb86c9af7c7035cf6a7dbc35fd9582318f5a98b73" Dec 02 20:44:29 crc kubenswrapper[4796]: I1202 20:44:29.090950 4796 scope.go:117] "RemoveContainer" containerID="4e87962bb1ba0cfaa517f073c2d8fe6de1c7c26d553768a3ad48401148111c02" Dec 02 20:44:29 crc kubenswrapper[4796]: I1202 20:44:29.129197 4796 scope.go:117] "RemoveContainer" containerID="2ebb6b05c1b1e70a104c5eccb04bed4e1982b1985585af15cb4f67130dcd4e1d" Dec 02 20:44:29 crc kubenswrapper[4796]: I1202 20:44:29.150761 4796 scope.go:117] "RemoveContainer" containerID="448c4a0205d04bd542a12e6b1d636a513e6cad7a10a5b861b7f952db0ea0ad85" Dec 02 20:44:29 crc kubenswrapper[4796]: I1202 20:44:29.191194 4796 scope.go:117] "RemoveContainer" containerID="a03a4b356050b4fa2297f8bc886c27a2381c1c565aa7f2c1b9a47258f297a073" Dec 02 20:44:34 crc kubenswrapper[4796]: I1202 20:44:34.842819 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-8qqvs_3c7bea3f-e468-4655-b241-ae19d336c6c0/control-plane-machine-set-operator/0.log" Dec 02 20:44:34 crc kubenswrapper[4796]: I1202 20:44:34.976498 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-p6qqx_f24621ef-6ee6-4d48-bcdb-ca5f8bb7731a/kube-rbac-proxy/0.log" Dec 02 20:44:35 crc kubenswrapper[4796]: I1202 20:44:35.073802 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-p6qqx_f24621ef-6ee6-4d48-bcdb-ca5f8bb7731a/machine-api-operator/0.log" Dec 02 20:44:50 crc kubenswrapper[4796]: I1202 20:44:50.475876 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-4ct7l_25f9ec51-75dd-40f8-b598-cfae95b84574/cert-manager-controller/0.log" Dec 02 20:44:50 crc kubenswrapper[4796]: I1202 20:44:50.723294 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-p48zl_cb8821bc-4ada-47cf-88c5-210a84203e01/cert-manager-webhook/0.log" Dec 02 20:44:50 crc kubenswrapper[4796]: I1202 20:44:50.772295 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-fwgmt_e17129f0-a2dc-48f9-94e9-08d6cc73319d/cert-manager-cainjector/0.log" Dec 02 20:45:00 crc kubenswrapper[4796]: I1202 20:45:00.149327 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411805-v4svm"] Dec 02 20:45:00 crc kubenswrapper[4796]: E1202 20:45:00.150363 4796 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9ed9267c-482b-4632-a6aa-c6b4998b3cd3" containerName="registry-server" Dec 02 20:45:00 crc kubenswrapper[4796]: I1202 20:45:00.150380 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ed9267c-482b-4632-a6aa-c6b4998b3cd3" containerName="registry-server" Dec 02 20:45:00 crc kubenswrapper[4796]: E1202 20:45:00.150397 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ed9267c-482b-4632-a6aa-c6b4998b3cd3" containerName="extract-utilities" Dec 02 20:45:00 crc kubenswrapper[4796]: I1202 20:45:00.150405 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ed9267c-482b-4632-a6aa-c6b4998b3cd3" containerName="extract-utilities" Dec 02 20:45:00 crc kubenswrapper[4796]: E1202 20:45:00.150422 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ed9267c-482b-4632-a6aa-c6b4998b3cd3" containerName="extract-content" Dec 02 20:45:00 crc kubenswrapper[4796]: I1202 20:45:00.150430 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ed9267c-482b-4632-a6aa-c6b4998b3cd3" containerName="extract-content" Dec 02 20:45:00 crc kubenswrapper[4796]: I1202 20:45:00.150634 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ed9267c-482b-4632-a6aa-c6b4998b3cd3" containerName="registry-server" Dec 02 20:45:00 crc kubenswrapper[4796]: I1202 20:45:00.151230 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411805-v4svm" Dec 02 20:45:00 crc kubenswrapper[4796]: I1202 20:45:00.153365 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 02 20:45:00 crc kubenswrapper[4796]: I1202 20:45:00.156077 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 02 20:45:00 crc kubenswrapper[4796]: I1202 20:45:00.160328 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411805-v4svm"] Dec 02 20:45:00 crc kubenswrapper[4796]: I1202 20:45:00.266157 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5346177c-9ae4-418a-9f07-28d4a1097b11-secret-volume\") pod \"collect-profiles-29411805-v4svm\" (UID: \"5346177c-9ae4-418a-9f07-28d4a1097b11\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411805-v4svm" Dec 02 20:45:00 crc kubenswrapper[4796]: I1202 20:45:00.266269 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5346177c-9ae4-418a-9f07-28d4a1097b11-config-volume\") pod \"collect-profiles-29411805-v4svm\" (UID: \"5346177c-9ae4-418a-9f07-28d4a1097b11\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411805-v4svm" Dec 02 20:45:00 crc kubenswrapper[4796]: I1202 20:45:00.266300 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2h8h\" (UniqueName: \"kubernetes.io/projected/5346177c-9ae4-418a-9f07-28d4a1097b11-kube-api-access-r2h8h\") pod \"collect-profiles-29411805-v4svm\" (UID: \"5346177c-9ae4-418a-9f07-28d4a1097b11\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411805-v4svm" Dec 02 20:45:00 crc kubenswrapper[4796]: I1202 20:45:00.367533 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/5346177c-9ae4-418a-9f07-28d4a1097b11-config-volume\") pod \"collect-profiles-29411805-v4svm\" (UID: \"5346177c-9ae4-418a-9f07-28d4a1097b11\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411805-v4svm" Dec 02 20:45:00 crc kubenswrapper[4796]: I1202 20:45:00.367586 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2h8h\" (UniqueName: \"kubernetes.io/projected/5346177c-9ae4-418a-9f07-28d4a1097b11-kube-api-access-r2h8h\") pod \"collect-profiles-29411805-v4svm\" (UID: \"5346177c-9ae4-418a-9f07-28d4a1097b11\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411805-v4svm" Dec 02 20:45:00 crc kubenswrapper[4796]: I1202 20:45:00.367667 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5346177c-9ae4-418a-9f07-28d4a1097b11-secret-volume\") pod \"collect-profiles-29411805-v4svm\" (UID: \"5346177c-9ae4-418a-9f07-28d4a1097b11\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411805-v4svm" Dec 02 20:45:00 crc kubenswrapper[4796]: I1202 20:45:00.368747 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5346177c-9ae4-418a-9f07-28d4a1097b11-config-volume\") pod \"collect-profiles-29411805-v4svm\" (UID: \"5346177c-9ae4-418a-9f07-28d4a1097b11\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411805-v4svm" Dec 02 20:45:00 crc kubenswrapper[4796]: I1202 20:45:00.373850 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5346177c-9ae4-418a-9f07-28d4a1097b11-secret-volume\") pod \"collect-profiles-29411805-v4svm\" (UID: \"5346177c-9ae4-418a-9f07-28d4a1097b11\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411805-v4svm" Dec 02 20:45:00 crc kubenswrapper[4796]: I1202 20:45:00.389238 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2h8h\" (UniqueName: \"kubernetes.io/projected/5346177c-9ae4-418a-9f07-28d4a1097b11-kube-api-access-r2h8h\") pod \"collect-profiles-29411805-v4svm\" (UID: \"5346177c-9ae4-418a-9f07-28d4a1097b11\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411805-v4svm" Dec 02 20:45:00 crc kubenswrapper[4796]: I1202 20:45:00.477853 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411805-v4svm" Dec 02 20:45:00 crc kubenswrapper[4796]: I1202 20:45:00.953595 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411805-v4svm"] Dec 02 20:45:01 crc kubenswrapper[4796]: I1202 20:45:01.859844 4796 generic.go:334] "Generic (PLEG): container finished" podID="5346177c-9ae4-418a-9f07-28d4a1097b11" containerID="04338da20299b3e557df05e9d79929958134756951d16db3e26e47ee7eb1558f" exitCode=0 Dec 02 20:45:01 crc kubenswrapper[4796]: I1202 20:45:01.859965 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411805-v4svm" event={"ID":"5346177c-9ae4-418a-9f07-28d4a1097b11","Type":"ContainerDied","Data":"04338da20299b3e557df05e9d79929958134756951d16db3e26e47ee7eb1558f"} Dec 02 20:45:01 crc kubenswrapper[4796]: I1202 20:45:01.860229 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411805-v4svm" event={"ID":"5346177c-9ae4-418a-9f07-28d4a1097b11","Type":"ContainerStarted","Data":"fae394fec3d77a8b1d9eab8de285b92abe152f2089722b13ed1a12d9e1a3629d"} Dec 02 20:45:03 crc kubenswrapper[4796]: I1202 20:45:03.193519 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411805-v4svm" Dec 02 20:45:03 crc kubenswrapper[4796]: I1202 20:45:03.317291 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5346177c-9ae4-418a-9f07-28d4a1097b11-secret-volume\") pod \"5346177c-9ae4-418a-9f07-28d4a1097b11\" (UID: \"5346177c-9ae4-418a-9f07-28d4a1097b11\") " Dec 02 20:45:03 crc kubenswrapper[4796]: I1202 20:45:03.317379 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5346177c-9ae4-418a-9f07-28d4a1097b11-config-volume\") pod \"5346177c-9ae4-418a-9f07-28d4a1097b11\" (UID: \"5346177c-9ae4-418a-9f07-28d4a1097b11\") " Dec 02 20:45:03 crc kubenswrapper[4796]: I1202 20:45:03.317448 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2h8h\" (UniqueName: \"kubernetes.io/projected/5346177c-9ae4-418a-9f07-28d4a1097b11-kube-api-access-r2h8h\") pod \"5346177c-9ae4-418a-9f07-28d4a1097b11\" (UID: \"5346177c-9ae4-418a-9f07-28d4a1097b11\") " Dec 02 20:45:03 crc kubenswrapper[4796]: I1202 20:45:03.318875 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5346177c-9ae4-418a-9f07-28d4a1097b11-config-volume" (OuterVolumeSpecName: "config-volume") pod "5346177c-9ae4-418a-9f07-28d4a1097b11" (UID: "5346177c-9ae4-418a-9f07-28d4a1097b11"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 20:45:03 crc kubenswrapper[4796]: I1202 20:45:03.327468 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5346177c-9ae4-418a-9f07-28d4a1097b11-kube-api-access-r2h8h" (OuterVolumeSpecName: "kube-api-access-r2h8h") pod "5346177c-9ae4-418a-9f07-28d4a1097b11" (UID: "5346177c-9ae4-418a-9f07-28d4a1097b11"). InnerVolumeSpecName "kube-api-access-r2h8h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:45:03 crc kubenswrapper[4796]: I1202 20:45:03.327754 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5346177c-9ae4-418a-9f07-28d4a1097b11-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5346177c-9ae4-418a-9f07-28d4a1097b11" (UID: "5346177c-9ae4-418a-9f07-28d4a1097b11"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 20:45:03 crc kubenswrapper[4796]: I1202 20:45:03.419156 4796 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5346177c-9ae4-418a-9f07-28d4a1097b11-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 02 20:45:03 crc kubenswrapper[4796]: I1202 20:45:03.419200 4796 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5346177c-9ae4-418a-9f07-28d4a1097b11-config-volume\") on node \"crc\" DevicePath \"\"" Dec 02 20:45:03 crc kubenswrapper[4796]: I1202 20:45:03.419216 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2h8h\" (UniqueName: \"kubernetes.io/projected/5346177c-9ae4-418a-9f07-28d4a1097b11-kube-api-access-r2h8h\") on node \"crc\" DevicePath \"\"" Dec 02 20:45:03 crc kubenswrapper[4796]: I1202 20:45:03.879315 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411805-v4svm" event={"ID":"5346177c-9ae4-418a-9f07-28d4a1097b11","Type":"ContainerDied","Data":"fae394fec3d77a8b1d9eab8de285b92abe152f2089722b13ed1a12d9e1a3629d"} Dec 02 20:45:03 crc kubenswrapper[4796]: I1202 20:45:03.879351 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fae394fec3d77a8b1d9eab8de285b92abe152f2089722b13ed1a12d9e1a3629d" Dec 02 20:45:03 crc kubenswrapper[4796]: I1202 20:45:03.879401 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411805-v4svm" Dec 02 20:45:04 crc kubenswrapper[4796]: I1202 20:45:04.308091 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411760-zzp4z"] Dec 02 20:45:04 crc kubenswrapper[4796]: I1202 20:45:04.324472 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411760-zzp4z"] Dec 02 20:45:05 crc kubenswrapper[4796]: I1202 20:45:05.276292 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfed3adf-1892-4da3-8ffc-f4033036a4ca" path="/var/lib/kubelet/pods/bfed3adf-1892-4da3-8ffc-f4033036a4ca/volumes" Dec 02 20:45:06 crc kubenswrapper[4796]: I1202 20:45:06.810273 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-zbtm2_27412017-4447-4daa-817e-6bb21c045489/nmstate-console-plugin/0.log" Dec 02 20:45:07 crc kubenswrapper[4796]: I1202 20:45:07.056925 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-7h74f_d870477b-675b-429c-9d60-0b482dcf4996/nmstate-handler/0.log" Dec 02 20:45:07 crc kubenswrapper[4796]: I1202 20:45:07.127181 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-bf7zw_6a9c5307-34c1-408b-87d4-b9d005660199/kube-rbac-proxy/0.log" Dec 02 20:45:07 crc kubenswrapper[4796]: I1202 20:45:07.249763 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-bf7zw_6a9c5307-34c1-408b-87d4-b9d005660199/nmstate-metrics/0.log" Dec 02 20:45:07 crc kubenswrapper[4796]: I1202 20:45:07.337633 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-clznf_faa51f2e-e5a4-4e9a-a808-2740d0511d04/nmstate-operator/0.log" Dec 02 20:45:07 crc kubenswrapper[4796]: I1202 20:45:07.476171 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-xnjvp_02b629d9-8fcd-4caf-95d7-b0cb92a1a76f/nmstate-webhook/0.log" Dec 02 20:45:25 crc kubenswrapper[4796]: I1202 20:45:25.189616 4796 patch_prober.go:28] interesting pod/machine-config-daemon-wzhpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 20:45:25 crc kubenswrapper[4796]: I1202 20:45:25.190623 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 20:45:25 crc kubenswrapper[4796]: I1202 20:45:25.541832 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-pw5b4_b7cba022-cc8d-457b-86e9-d97b822f03a4/kube-rbac-proxy/0.log" Dec 02 20:45:25 crc kubenswrapper[4796]: I1202 20:45:25.591790 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-pw5b4_b7cba022-cc8d-457b-86e9-d97b822f03a4/controller/0.log" Dec 02 20:45:26 crc kubenswrapper[4796]: I1202 20:45:26.071344 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-bgvx8_15e0e87e-e84b-4d87-9e25-224fd500c3a6/cp-frr-files/0.log" Dec 02 20:45:26 crc kubenswrapper[4796]: I1202 20:45:26.307394 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bgvx8_15e0e87e-e84b-4d87-9e25-224fd500c3a6/cp-metrics/0.log" Dec 02 20:45:26 crc kubenswrapper[4796]: I1202 20:45:26.391072 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bgvx8_15e0e87e-e84b-4d87-9e25-224fd500c3a6/cp-reloader/0.log" Dec 02 20:45:26 crc kubenswrapper[4796]: I1202 20:45:26.397424 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bgvx8_15e0e87e-e84b-4d87-9e25-224fd500c3a6/cp-reloader/0.log" Dec 02 20:45:26 crc kubenswrapper[4796]: I1202 20:45:26.422137 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bgvx8_15e0e87e-e84b-4d87-9e25-224fd500c3a6/cp-frr-files/0.log" Dec 02 20:45:26 crc kubenswrapper[4796]: I1202 20:45:26.597416 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bgvx8_15e0e87e-e84b-4d87-9e25-224fd500c3a6/cp-frr-files/0.log" Dec 02 20:45:26 crc kubenswrapper[4796]: I1202 20:45:26.601126 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bgvx8_15e0e87e-e84b-4d87-9e25-224fd500c3a6/cp-reloader/0.log" Dec 02 20:45:26 crc kubenswrapper[4796]: I1202 20:45:26.613843 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bgvx8_15e0e87e-e84b-4d87-9e25-224fd500c3a6/cp-metrics/0.log" Dec 02 20:45:26 crc kubenswrapper[4796]: I1202 20:45:26.640598 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bgvx8_15e0e87e-e84b-4d87-9e25-224fd500c3a6/cp-metrics/0.log" Dec 02 20:45:26 crc kubenswrapper[4796]: I1202 20:45:26.882120 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bgvx8_15e0e87e-e84b-4d87-9e25-224fd500c3a6/controller/0.log" Dec 02 20:45:26 crc kubenswrapper[4796]: I1202 20:45:26.937560 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bgvx8_15e0e87e-e84b-4d87-9e25-224fd500c3a6/cp-metrics/0.log" Dec 02 20:45:26 crc kubenswrapper[4796]: I1202 20:45:26.947586 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bgvx8_15e0e87e-e84b-4d87-9e25-224fd500c3a6/cp-reloader/0.log" Dec 02 20:45:26 crc kubenswrapper[4796]: I1202 20:45:26.979166 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bgvx8_15e0e87e-e84b-4d87-9e25-224fd500c3a6/cp-frr-files/0.log" Dec 02 20:45:27 crc kubenswrapper[4796]: I1202 20:45:27.187439 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bgvx8_15e0e87e-e84b-4d87-9e25-224fd500c3a6/frr-metrics/0.log" Dec 02 20:45:27 crc kubenswrapper[4796]: I1202 20:45:27.189323 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bgvx8_15e0e87e-e84b-4d87-9e25-224fd500c3a6/kube-rbac-proxy/0.log" Dec 02 20:45:27 crc kubenswrapper[4796]: I1202 20:45:27.208154 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bgvx8_15e0e87e-e84b-4d87-9e25-224fd500c3a6/kube-rbac-proxy-frr/0.log" Dec 02 20:45:27 crc kubenswrapper[4796]: I1202 20:45:27.402668 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bgvx8_15e0e87e-e84b-4d87-9e25-224fd500c3a6/reloader/0.log" Dec 02 20:45:27 crc kubenswrapper[4796]: I1202 
20:45:27.420651 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-lz795_bbaf8931-75f7-42ae-a13d-69218a478762/frr-k8s-webhook-server/0.log" Dec 02 20:45:27 crc kubenswrapper[4796]: I1202 20:45:27.671589 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-746c7ccd88-dmrmm_5f2a6f4c-f52d-443a-bd27-8066451b87f2/manager/0.log" Dec 02 20:45:27 crc kubenswrapper[4796]: I1202 20:45:27.932099 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7h6dq_7f44f3c0-eb38-4b85-a27d-94aa92562837/kube-rbac-proxy/0.log" Dec 02 20:45:27 crc kubenswrapper[4796]: I1202 20:45:27.932571 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5d77994449-g7tpd_7622e83f-8b87-470f-b7b1-adb772c3cafa/webhook-server/0.log" Dec 02 20:45:28 crc kubenswrapper[4796]: I1202 20:45:28.222018 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bgvx8_15e0e87e-e84b-4d87-9e25-224fd500c3a6/frr/0.log" Dec 02 20:45:28 crc kubenswrapper[4796]: I1202 20:45:28.382981 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7h6dq_7f44f3c0-eb38-4b85-a27d-94aa92562837/speaker/0.log" Dec 02 20:45:29 crc kubenswrapper[4796]: I1202 20:45:29.315566 4796 scope.go:117] "RemoveContainer" containerID="c992c846d3d65fb3bfdbb7512755392dfbb8bdc8f864186bf3acd298cf1aa3a3" Dec 02 20:45:29 crc kubenswrapper[4796]: I1202 20:45:29.341776 4796 scope.go:117] "RemoveContainer" containerID="315836e3c4f18ebca59f4b0bc725aa52dbf8449c8292a10fac6b44dc47598897" Dec 02 20:45:29 crc kubenswrapper[4796]: I1202 20:45:29.385970 4796 scope.go:117] "RemoveContainer" containerID="bb23a8dd02fc62064058c3f08dfe276714c86c9f01f3bb81e50bd65734f03a95" Dec 02 20:45:29 crc kubenswrapper[4796]: I1202 20:45:29.421909 4796 scope.go:117] "RemoveContainer" containerID="78989ff6b440796ea937bd5933467798ff57dc131cfb36aae0fe286364e09a60" Dec 02 20:45:29 crc kubenswrapper[4796]: I1202 20:45:29.436720 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ncs85"] Dec 02 20:45:29 crc kubenswrapper[4796]: E1202 20:45:29.437070 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5346177c-9ae4-418a-9f07-28d4a1097b11" containerName="collect-profiles" Dec 02 20:45:29 crc kubenswrapper[4796]: I1202 20:45:29.437086 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="5346177c-9ae4-418a-9f07-28d4a1097b11" containerName="collect-profiles" Dec 02 20:45:29 crc kubenswrapper[4796]: I1202 20:45:29.437242 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="5346177c-9ae4-418a-9f07-28d4a1097b11" containerName="collect-profiles" Dec 02 20:45:29 crc kubenswrapper[4796]: I1202 20:45:29.438449 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ncs85" Dec 02 20:45:29 crc kubenswrapper[4796]: I1202 20:45:29.460927 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ncs85"] Dec 02 20:45:29 crc kubenswrapper[4796]: I1202 20:45:29.484485 4796 scope.go:117] "RemoveContainer" containerID="985476a941d6b8412885000eaed827b938002fd9d3f46536988dfe36ebbf9a44" Dec 02 20:45:29 crc kubenswrapper[4796]: I1202 20:45:29.547449 4796 scope.go:117] "RemoveContainer" containerID="5c623060c714e15333e88b22dd21193efbaf6c2dc3e5595aa77e5605d43804d8" Dec 02 20:45:29 crc kubenswrapper[4796]: I1202 20:45:29.563576 4796 scope.go:117] "RemoveContainer" containerID="ab16702cdb1959f03fc3178301e16c70f8a51cef591e86b7f1e3777f639debf5" Dec 02 20:45:29 crc kubenswrapper[4796]: I1202 20:45:29.599343 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkt6c\" (UniqueName: \"kubernetes.io/projected/c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4-kube-api-access-rkt6c\") pod \"redhat-operators-ncs85\" (UID: \"c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4\") " pod="openshift-marketplace/redhat-operators-ncs85" Dec 02 20:45:29 crc kubenswrapper[4796]: I1202 20:45:29.599382 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4-catalog-content\") pod \"redhat-operators-ncs85\" (UID: \"c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4\") " pod="openshift-marketplace/redhat-operators-ncs85" Dec 02 20:45:29 crc kubenswrapper[4796]: I1202 20:45:29.599405 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4-utilities\") pod \"redhat-operators-ncs85\" (UID: \"c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4\") " pod="openshift-marketplace/redhat-operators-ncs85" Dec 02 20:45:29 crc kubenswrapper[4796]: I1202 20:45:29.701097 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkt6c\" (UniqueName: \"kubernetes.io/projected/c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4-kube-api-access-rkt6c\") pod \"redhat-operators-ncs85\" (UID: \"c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4\") " pod="openshift-marketplace/redhat-operators-ncs85" Dec 02 20:45:29 crc kubenswrapper[4796]: I1202 20:45:29.701141 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4-catalog-content\") pod \"redhat-operators-ncs85\" (UID: \"c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4\") " pod="openshift-marketplace/redhat-operators-ncs85" Dec 02 20:45:29 crc kubenswrapper[4796]: I1202 20:45:29.701168 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4-utilities\") pod \"redhat-operators-ncs85\" (UID: \"c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4\") " pod="openshift-marketplace/redhat-operators-ncs85" Dec 02 20:45:29 crc kubenswrapper[4796]: I1202 20:45:29.701681 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4-utilities\") pod \"redhat-operators-ncs85\" (UID: \"c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4\") " 
pod="openshift-marketplace/redhat-operators-ncs85" Dec 02 20:45:29 crc kubenswrapper[4796]: I1202 20:45:29.701755 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4-catalog-content\") pod \"redhat-operators-ncs85\" (UID: \"c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4\") " pod="openshift-marketplace/redhat-operators-ncs85" Dec 02 20:45:29 crc kubenswrapper[4796]: I1202 20:45:29.723039 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkt6c\" (UniqueName: \"kubernetes.io/projected/c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4-kube-api-access-rkt6c\") pod \"redhat-operators-ncs85\" (UID: \"c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4\") " pod="openshift-marketplace/redhat-operators-ncs85" Dec 02 20:45:29 crc kubenswrapper[4796]: I1202 20:45:29.783797 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ncs85" Dec 02 20:45:30 crc kubenswrapper[4796]: I1202 20:45:30.060805 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ncs85"] Dec 02 20:45:30 crc kubenswrapper[4796]: I1202 20:45:30.158594 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ncs85" event={"ID":"c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4","Type":"ContainerStarted","Data":"3a4e2e3137399ea14d21500b5335d7e62dd0a79314e094c474b0395b3d3e89d0"} Dec 02 20:45:31 crc kubenswrapper[4796]: I1202 20:45:31.175692 4796 generic.go:334] "Generic (PLEG): container finished" podID="c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4" containerID="f03ce545b2a6b5d087ff82956ccca8044e8130a697842c00b38a81a00044c393" exitCode=0 Dec 02 20:45:31 crc kubenswrapper[4796]: I1202 20:45:31.176191 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ncs85" event={"ID":"c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4","Type":"ContainerDied","Data":"f03ce545b2a6b5d087ff82956ccca8044e8130a697842c00b38a81a00044c393"} Dec 02 20:45:32 crc kubenswrapper[4796]: I1202 20:45:32.185678 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ncs85" event={"ID":"c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4","Type":"ContainerStarted","Data":"ce8adb50ef1085d93191401ed5a2ece24d7a232379f7f0c8250f0a852214fb40"} Dec 02 20:45:33 crc kubenswrapper[4796]: I1202 20:45:33.195378 4796 generic.go:334] "Generic (PLEG): container finished" podID="c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4" containerID="ce8adb50ef1085d93191401ed5a2ece24d7a232379f7f0c8250f0a852214fb40" exitCode=0 Dec 02 20:45:33 crc kubenswrapper[4796]: I1202 20:45:33.195484 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ncs85" event={"ID":"c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4","Type":"ContainerDied","Data":"ce8adb50ef1085d93191401ed5a2ece24d7a232379f7f0c8250f0a852214fb40"} Dec 02 20:45:34 crc kubenswrapper[4796]: I1202 20:45:34.206145 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ncs85" event={"ID":"c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4","Type":"ContainerStarted","Data":"b0992687591deef5090d002ba0aa606feee28378e86018bead2391cca32850c0"} Dec 02 20:45:34 crc kubenswrapper[4796]: I1202 20:45:34.236808 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ncs85" podStartSLOduration=2.809375002 podStartE2EDuration="5.236793096s" 
podCreationTimestamp="2025-12-02 20:45:29 +0000 UTC" firstStartedPulling="2025-12-02 20:45:31.17934455 +0000 UTC m=+2014.182720084" lastFinishedPulling="2025-12-02 20:45:33.606762644 +0000 UTC m=+2016.610138178" observedRunningTime="2025-12-02 20:45:34.231588451 +0000 UTC m=+2017.234963985" watchObservedRunningTime="2025-12-02 20:45:34.236793096 +0000 UTC m=+2017.240168630" Dec 02 20:45:39 crc kubenswrapper[4796]: I1202 20:45:39.784976 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ncs85" Dec 02 20:45:39 crc kubenswrapper[4796]: I1202 20:45:39.785674 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ncs85" Dec 02 20:45:39 crc kubenswrapper[4796]: I1202 20:45:39.854756 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ncs85" Dec 02 20:45:40 crc kubenswrapper[4796]: I1202 20:45:40.313724 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ncs85" Dec 02 20:45:43 crc kubenswrapper[4796]: I1202 20:45:43.424090 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ncs85"] Dec 02 20:45:43 crc kubenswrapper[4796]: I1202 20:45:43.424592 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ncs85" podUID="c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4" containerName="registry-server" containerID="cri-o://b0992687591deef5090d002ba0aa606feee28378e86018bead2391cca32850c0" gracePeriod=2 Dec 02 20:45:45 crc kubenswrapper[4796]: I1202 20:45:45.015308 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ncs85" Dec 02 20:45:45 crc kubenswrapper[4796]: I1202 20:45:45.193610 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4-utilities\") pod \"c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4\" (UID: \"c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4\") " Dec 02 20:45:45 crc kubenswrapper[4796]: I1202 20:45:45.194226 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkt6c\" (UniqueName: \"kubernetes.io/projected/c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4-kube-api-access-rkt6c\") pod \"c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4\" (UID: \"c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4\") " Dec 02 20:45:45 crc kubenswrapper[4796]: I1202 20:45:45.194306 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4-catalog-content\") pod \"c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4\" (UID: \"c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4\") " Dec 02 20:45:45 crc kubenswrapper[4796]: I1202 20:45:45.194683 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4-utilities" (OuterVolumeSpecName: "utilities") pod "c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4" (UID: "c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:45:45 crc kubenswrapper[4796]: I1202 20:45:45.200403 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4-kube-api-access-rkt6c" (OuterVolumeSpecName: "kube-api-access-rkt6c") pod "c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4" (UID: "c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4"). InnerVolumeSpecName "kube-api-access-rkt6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:45:45 crc kubenswrapper[4796]: I1202 20:45:45.298913 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkt6c\" (UniqueName: \"kubernetes.io/projected/c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4-kube-api-access-rkt6c\") on node \"crc\" DevicePath \"\"" Dec 02 20:45:45 crc kubenswrapper[4796]: I1202 20:45:45.299042 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 20:45:45 crc kubenswrapper[4796]: I1202 20:45:45.300953 4796 generic.go:334] "Generic (PLEG): container finished" podID="c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4" containerID="b0992687591deef5090d002ba0aa606feee28378e86018bead2391cca32850c0" exitCode=0 Dec 02 20:45:45 crc kubenswrapper[4796]: I1202 20:45:45.300998 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ncs85" event={"ID":"c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4","Type":"ContainerDied","Data":"b0992687591deef5090d002ba0aa606feee28378e86018bead2391cca32850c0"} Dec 02 20:45:45 crc kubenswrapper[4796]: I1202 20:45:45.301039 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ncs85" event={"ID":"c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4","Type":"ContainerDied","Data":"3a4e2e3137399ea14d21500b5335d7e62dd0a79314e094c474b0395b3d3e89d0"} Dec 02 20:45:45 crc kubenswrapper[4796]: I1202 20:45:45.301051 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ncs85" Dec 02 20:45:45 crc kubenswrapper[4796]: I1202 20:45:45.301064 4796 scope.go:117] "RemoveContainer" containerID="b0992687591deef5090d002ba0aa606feee28378e86018bead2391cca32850c0" Dec 02 20:45:45 crc kubenswrapper[4796]: I1202 20:45:45.319686 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4" (UID: "c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:45:45 crc kubenswrapper[4796]: I1202 20:45:45.343749 4796 scope.go:117] "RemoveContainer" containerID="ce8adb50ef1085d93191401ed5a2ece24d7a232379f7f0c8250f0a852214fb40" Dec 02 20:45:45 crc kubenswrapper[4796]: I1202 20:45:45.400939 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 20:45:45 crc kubenswrapper[4796]: I1202 20:45:45.409349 4796 scope.go:117] "RemoveContainer" containerID="f03ce545b2a6b5d087ff82956ccca8044e8130a697842c00b38a81a00044c393" Dec 02 20:45:45 crc kubenswrapper[4796]: I1202 20:45:45.440073 4796 scope.go:117] "RemoveContainer" containerID="b0992687591deef5090d002ba0aa606feee28378e86018bead2391cca32850c0" Dec 02 20:45:45 crc kubenswrapper[4796]: E1202 20:45:45.448441 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0992687591deef5090d002ba0aa606feee28378e86018bead2391cca32850c0\": container with ID starting with b0992687591deef5090d002ba0aa606feee28378e86018bead2391cca32850c0 not found: ID does not exist" containerID="b0992687591deef5090d002ba0aa606feee28378e86018bead2391cca32850c0" Dec 02 20:45:45 crc kubenswrapper[4796]: I1202 20:45:45.448492 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0992687591deef5090d002ba0aa606feee28378e86018bead2391cca32850c0"} err="failed to get container status \"b0992687591deef5090d002ba0aa606feee28378e86018bead2391cca32850c0\": rpc error: code = NotFound desc = could not find container \"b0992687591deef5090d002ba0aa606feee28378e86018bead2391cca32850c0\": container with ID starting with b0992687591deef5090d002ba0aa606feee28378e86018bead2391cca32850c0 not found: ID does not exist" Dec 02 20:45:45 crc kubenswrapper[4796]: I1202 20:45:45.448525 4796 scope.go:117] "RemoveContainer" containerID="ce8adb50ef1085d93191401ed5a2ece24d7a232379f7f0c8250f0a852214fb40" Dec 02 20:45:45 crc kubenswrapper[4796]: E1202 20:45:45.449511 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce8adb50ef1085d93191401ed5a2ece24d7a232379f7f0c8250f0a852214fb40\": container with ID starting with ce8adb50ef1085d93191401ed5a2ece24d7a232379f7f0c8250f0a852214fb40 not found: ID does not exist" containerID="ce8adb50ef1085d93191401ed5a2ece24d7a232379f7f0c8250f0a852214fb40" Dec 02 20:45:45 crc kubenswrapper[4796]: I1202 20:45:45.449566 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce8adb50ef1085d93191401ed5a2ece24d7a232379f7f0c8250f0a852214fb40"} err="failed to get container status \"ce8adb50ef1085d93191401ed5a2ece24d7a232379f7f0c8250f0a852214fb40\": rpc error: code = NotFound desc = could not find container \"ce8adb50ef1085d93191401ed5a2ece24d7a232379f7f0c8250f0a852214fb40\": container with ID starting with ce8adb50ef1085d93191401ed5a2ece24d7a232379f7f0c8250f0a852214fb40 not found: ID does not exist" Dec 02 20:45:45 crc kubenswrapper[4796]: I1202 20:45:45.449588 4796 scope.go:117] "RemoveContainer" containerID="f03ce545b2a6b5d087ff82956ccca8044e8130a697842c00b38a81a00044c393" Dec 02 20:45:45 crc kubenswrapper[4796]: E1202 20:45:45.450527 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f03ce545b2a6b5d087ff82956ccca8044e8130a697842c00b38a81a00044c393\": container with ID starting with f03ce545b2a6b5d087ff82956ccca8044e8130a697842c00b38a81a00044c393 not found: ID does not exist" containerID="f03ce545b2a6b5d087ff82956ccca8044e8130a697842c00b38a81a00044c393" Dec 02 20:45:45 crc kubenswrapper[4796]: I1202 20:45:45.450555 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f03ce545b2a6b5d087ff82956ccca8044e8130a697842c00b38a81a00044c393"} err="failed to get container status \"f03ce545b2a6b5d087ff82956ccca8044e8130a697842c00b38a81a00044c393\": rpc error: code = NotFound desc = could not find container \"f03ce545b2a6b5d087ff82956ccca8044e8130a697842c00b38a81a00044c393\": container with ID starting with f03ce545b2a6b5d087ff82956ccca8044e8130a697842c00b38a81a00044c393 not found: ID does not exist" Dec 02 20:45:45 crc kubenswrapper[4796]: I1202 20:45:45.638091 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ncs85"] Dec 02 20:45:45 crc kubenswrapper[4796]: I1202 20:45:45.642103 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ncs85"] Dec 02 20:45:47 crc kubenswrapper[4796]: I1202 20:45:47.280044 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4" path="/var/lib/kubelet/pods/c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4/volumes" Dec 02 20:45:55 crc kubenswrapper[4796]: I1202 20:45:55.189918 4796 patch_prober.go:28] interesting pod/machine-config-daemon-wzhpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 20:45:55 crc kubenswrapper[4796]: I1202 20:45:55.190715 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 20:45:56 crc kubenswrapper[4796]: I1202 20:45:56.609149 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_alertmanager-metric-storage-0_2b4c0794-6b59-4170-8508-0e37663c7094/init-config-reloader/0.log" Dec 02 20:45:56 crc kubenswrapper[4796]: I1202 20:45:56.762136 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_alertmanager-metric-storage-0_2b4c0794-6b59-4170-8508-0e37663c7094/init-config-reloader/0.log" Dec 02 20:45:56 crc kubenswrapper[4796]: I1202 20:45:56.783441 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_alertmanager-metric-storage-0_2b4c0794-6b59-4170-8508-0e37663c7094/config-reloader/0.log" Dec 02 20:45:56 crc kubenswrapper[4796]: I1202 20:45:56.818363 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_alertmanager-metric-storage-0_2b4c0794-6b59-4170-8508-0e37663c7094/alertmanager/0.log" Dec 02 20:45:56 crc kubenswrapper[4796]: I1202 20:45:56.938091 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_ceilometer-0_6a49738d-5976-4610-af11-097c8f69f94d/ceilometer-central-agent/0.log" Dec 02 20:45:56 crc kubenswrapper[4796]: I1202 20:45:56.991388 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_ceilometer-0_6a49738d-5976-4610-af11-097c8f69f94d/ceilometer-notification-agent/0.log" Dec 02 20:45:57 crc kubenswrapper[4796]: I1202 20:45:57.014142 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_ceilometer-0_6a49738d-5976-4610-af11-097c8f69f94d/proxy-httpd/0.log" Dec 02 20:45:57 crc kubenswrapper[4796]: I1202 20:45:57.056609 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_ceilometer-0_6a49738d-5976-4610-af11-097c8f69f94d/sg-core/0.log" Dec 02 20:45:57 crc kubenswrapper[4796]: I1202 20:45:57.204098 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_keystone-5d85f8c497-qddnq_fa9cece5-f7ba-4e35-918f-33714da53c64/keystone-api/0.log" Dec 02 20:45:57 crc kubenswrapper[4796]: I1202 20:45:57.289190 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_keystone-bootstrap-v825p_1ac1760c-0cd2-480a-8315-4054fc65f81a/keystone-bootstrap/0.log" Dec 02 20:45:57 crc kubenswrapper[4796]: I1202 20:45:57.418919 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_kube-state-metrics-0_353c1dae-27b5-40ce-b56c-f521add86d37/kube-state-metrics/0.log" Dec 02 20:45:57 crc kubenswrapper[4796]: I1202 20:45:57.730389 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_openstack-galera-0_7c8100d7-1ae6-4220-88d4-527f681270b3/mysql-bootstrap/0.log" Dec 02 20:45:57 crc kubenswrapper[4796]: I1202 20:45:57.970745 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_openstack-galera-0_7c8100d7-1ae6-4220-88d4-527f681270b3/mysql-bootstrap/0.log" Dec 02 20:45:57 crc kubenswrapper[4796]: I1202 20:45:57.975559 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_openstack-galera-0_7c8100d7-1ae6-4220-88d4-527f681270b3/galera/0.log" Dec 02 20:45:58 crc kubenswrapper[4796]: I1202 20:45:58.167211 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_openstackclient_40949cd7-9274-47e5-bf48-306fed2c0360/openstackclient/0.log" Dec 02 20:45:58 crc kubenswrapper[4796]: I1202 20:45:58.232886 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_94835415-6d5d-492c-9a10-b023803a2978/init-config-reloader/0.log" Dec 02 20:45:58 crc kubenswrapper[4796]: I1202 20:45:58.554540 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_94835415-6d5d-492c-9a10-b023803a2978/init-config-reloader/0.log" Dec 02 20:45:58 crc kubenswrapper[4796]: I1202 20:45:58.555083 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_94835415-6d5d-492c-9a10-b023803a2978/config-reloader/0.log" Dec 02 20:45:58 crc kubenswrapper[4796]: I1202 20:45:58.696034 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_94835415-6d5d-492c-9a10-b023803a2978/prometheus/0.log" Dec 02 20:45:58 crc kubenswrapper[4796]: I1202 20:45:58.826645 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_94835415-6d5d-492c-9a10-b023803a2978/thanos-sidecar/0.log" Dec 02 20:45:59 crc kubenswrapper[4796]: I1202 20:45:59.034186 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_rabbitmq-notifications-server-0_0add539b-c51e-4616-9235-12465a2e5ecb/setup-container/0.log" Dec 02 20:45:59 crc kubenswrapper[4796]: I1202 20:45:59.285909 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-notifications-server-0_0add539b-c51e-4616-9235-12465a2e5ecb/setup-container/0.log" Dec 02 20:45:59 crc kubenswrapper[4796]: I1202 20:45:59.441411 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-notifications-server-0_0add539b-c51e-4616-9235-12465a2e5ecb/rabbitmq/0.log" Dec 02 20:45:59 crc kubenswrapper[4796]: I1202 20:45:59.879224 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-server-0_17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2/setup-container/0.log" Dec 02 20:46:00 crc kubenswrapper[4796]: I1202 20:46:00.102392 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-server-0_17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2/setup-container/0.log" Dec 02 20:46:00 crc kubenswrapper[4796]: I1202 20:46:00.176235 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-server-0_17af6ffb-13e2-4d71-a7d1-df29d4e5c0f2/rabbitmq/0.log" Dec 02 20:46:06 crc kubenswrapper[4796]: I1202 20:46:06.286945 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_memcached-0_2919071b-425d-403e-9bff-fccdd9142500/memcached/0.log" Dec 02 20:46:19 crc kubenswrapper[4796]: I1202 20:46:19.056550 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-v825p"] Dec 02 20:46:19 crc kubenswrapper[4796]: I1202 20:46:19.062518 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-v825p"] Dec 02 20:46:19 crc kubenswrapper[4796]: I1202 20:46:19.275329 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ac1760c-0cd2-480a-8315-4054fc65f81a" path="/var/lib/kubelet/pods/1ac1760c-0cd2-480a-8315-4054fc65f81a/volumes" Dec 02 20:46:20 crc kubenswrapper[4796]: I1202 20:46:20.666417 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzrrh_41224180-82e9-44bf-960c-c1a21df6f98e/util/0.log" Dec 02 20:46:20 crc kubenswrapper[4796]: I1202 20:46:20.808710 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzrrh_41224180-82e9-44bf-960c-c1a21df6f98e/util/0.log" Dec 02 20:46:20 crc kubenswrapper[4796]: I1202 20:46:20.881382 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzrrh_41224180-82e9-44bf-960c-c1a21df6f98e/pull/0.log" Dec 02 20:46:20 crc kubenswrapper[4796]: I1202 20:46:20.925866 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzrrh_41224180-82e9-44bf-960c-c1a21df6f98e/pull/0.log" Dec 02 20:46:21 crc kubenswrapper[4796]: I1202 20:46:21.091171 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzrrh_41224180-82e9-44bf-960c-c1a21df6f98e/util/0.log" Dec 02 20:46:21 crc kubenswrapper[4796]: I1202 20:46:21.092330 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzrrh_41224180-82e9-44bf-960c-c1a21df6f98e/extract/0.log" Dec 02 20:46:21 crc kubenswrapper[4796]: I1202 20:46:21.115495 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzrrh_41224180-82e9-44bf-960c-c1a21df6f98e/pull/0.log" Dec 02 20:46:21 crc kubenswrapper[4796]: I1202 20:46:21.289985 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2qsx7_fdf39aad-b46e-4e58-afad-530ece05f9ad/util/0.log" Dec 02 20:46:21 crc kubenswrapper[4796]: I1202 20:46:21.510873 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2qsx7_fdf39aad-b46e-4e58-afad-530ece05f9ad/util/0.log" Dec 02 20:46:21 crc kubenswrapper[4796]: I1202 20:46:21.584866 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2qsx7_fdf39aad-b46e-4e58-afad-530ece05f9ad/pull/0.log" Dec 02 20:46:21 crc kubenswrapper[4796]: I1202 20:46:21.607958 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2qsx7_fdf39aad-b46e-4e58-afad-530ece05f9ad/pull/0.log" Dec 02 20:46:21 crc kubenswrapper[4796]: I1202 20:46:21.776025 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2qsx7_fdf39aad-b46e-4e58-afad-530ece05f9ad/util/0.log" Dec 02 20:46:21 crc kubenswrapper[4796]: I1202 20:46:21.789110 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2qsx7_fdf39aad-b46e-4e58-afad-530ece05f9ad/pull/0.log" Dec 02 20:46:21 crc kubenswrapper[4796]: I1202 20:46:21.796835 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2qsx7_fdf39aad-b46e-4e58-afad-530ece05f9ad/extract/0.log" Dec 02 20:46:22 crc kubenswrapper[4796]: I1202 20:46:22.039214 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x5xmd_1f8fb22d-6596-49d6-a76d-a1952a63c9a3/util/0.log" Dec 02 20:46:22 crc kubenswrapper[4796]: I1202 20:46:22.231236 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x5xmd_1f8fb22d-6596-49d6-a76d-a1952a63c9a3/pull/0.log" Dec 02 20:46:22 crc kubenswrapper[4796]: I1202 20:46:22.235145 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x5xmd_1f8fb22d-6596-49d6-a76d-a1952a63c9a3/pull/0.log" Dec 02 20:46:22 crc kubenswrapper[4796]: I1202 20:46:22.296550 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x5xmd_1f8fb22d-6596-49d6-a76d-a1952a63c9a3/util/0.log" Dec 02 20:46:22 crc kubenswrapper[4796]: I1202 20:46:22.501249 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x5xmd_1f8fb22d-6596-49d6-a76d-a1952a63c9a3/util/0.log" Dec 02 20:46:22 crc kubenswrapper[4796]: I1202 20:46:22.509357 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x5xmd_1f8fb22d-6596-49d6-a76d-a1952a63c9a3/extract/0.log" Dec 02 20:46:22 crc kubenswrapper[4796]: I1202 20:46:22.516316 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x5xmd_1f8fb22d-6596-49d6-a76d-a1952a63c9a3/pull/0.log" Dec 02 20:46:22 crc kubenswrapper[4796]: I1202 20:46:22.747616 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83jtdtp_9f40dddc-3877-418d-8f01-9c1ac187cccf/util/0.log" Dec 02 20:46:23 crc kubenswrapper[4796]: I1202 20:46:23.209412 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83jtdtp_9f40dddc-3877-418d-8f01-9c1ac187cccf/pull/0.log" Dec 02 20:46:23 crc kubenswrapper[4796]: I1202 20:46:23.228104 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83jtdtp_9f40dddc-3877-418d-8f01-9c1ac187cccf/pull/0.log" Dec 02 20:46:23 crc kubenswrapper[4796]: I1202 20:46:23.278379 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83jtdtp_9f40dddc-3877-418d-8f01-9c1ac187cccf/util/0.log" Dec 02 20:46:23 crc kubenswrapper[4796]: I1202 20:46:23.495606 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83jtdtp_9f40dddc-3877-418d-8f01-9c1ac187cccf/pull/0.log" Dec 02 20:46:23 crc kubenswrapper[4796]: I1202 20:46:23.518603 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83jtdtp_9f40dddc-3877-418d-8f01-9c1ac187cccf/extract/0.log" Dec 02 20:46:23 crc kubenswrapper[4796]: I1202 20:46:23.521356 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83jtdtp_9f40dddc-3877-418d-8f01-9c1ac187cccf/util/0.log" Dec 02 20:46:23 crc kubenswrapper[4796]: I1202 20:46:23.678781 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-62m7r_888e8184-64ca-4f8a-8a7e-ccf06695e6ec/extract-utilities/0.log" Dec 02 20:46:23 crc kubenswrapper[4796]: I1202 20:46:23.900745 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-62m7r_888e8184-64ca-4f8a-8a7e-ccf06695e6ec/extract-content/0.log" Dec 02 20:46:23 crc kubenswrapper[4796]: I1202 20:46:23.920128 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-62m7r_888e8184-64ca-4f8a-8a7e-ccf06695e6ec/extract-utilities/0.log" Dec 02 20:46:23 crc kubenswrapper[4796]: I1202 20:46:23.993003 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-62m7r_888e8184-64ca-4f8a-8a7e-ccf06695e6ec/extract-content/0.log" Dec 02 20:46:24 crc kubenswrapper[4796]: I1202 20:46:24.137361 4796 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-62m7r_888e8184-64ca-4f8a-8a7e-ccf06695e6ec/extract-utilities/0.log" Dec 02 20:46:24 crc kubenswrapper[4796]: I1202 20:46:24.189773 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-62m7r_888e8184-64ca-4f8a-8a7e-ccf06695e6ec/extract-content/0.log" Dec 02 20:46:24 crc kubenswrapper[4796]: I1202 20:46:24.382704 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tlm4b_6f6ec00d-4ac5-4442-a915-26d7109a0eac/extract-utilities/0.log" Dec 02 20:46:24 crc kubenswrapper[4796]: I1202 20:46:24.524004 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-62m7r_888e8184-64ca-4f8a-8a7e-ccf06695e6ec/registry-server/0.log" Dec 02 20:46:24 crc kubenswrapper[4796]: I1202 20:46:24.589922 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tlm4b_6f6ec00d-4ac5-4442-a915-26d7109a0eac/extract-content/0.log" Dec 02 20:46:24 crc kubenswrapper[4796]: I1202 20:46:24.599564 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tlm4b_6f6ec00d-4ac5-4442-a915-26d7109a0eac/extract-content/0.log" Dec 02 20:46:24 crc kubenswrapper[4796]: I1202 20:46:24.629406 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tlm4b_6f6ec00d-4ac5-4442-a915-26d7109a0eac/extract-utilities/0.log" Dec 02 20:46:24 crc kubenswrapper[4796]: I1202 20:46:24.795942 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tlm4b_6f6ec00d-4ac5-4442-a915-26d7109a0eac/extract-utilities/0.log" Dec 02 20:46:24 crc kubenswrapper[4796]: I1202 20:46:24.834010 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tlm4b_6f6ec00d-4ac5-4442-a915-26d7109a0eac/extract-content/0.log" Dec 02 20:46:24 crc kubenswrapper[4796]: I1202 20:46:24.842450 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-fgrrl_93e137a7-0366-483c-afc7-549b29d6c04d/marketplace-operator/0.log" Dec 02 20:46:25 crc kubenswrapper[4796]: I1202 20:46:25.019106 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p5k9r_7cb034af-9ad4-463d-8017-9df8e62d0a24/extract-utilities/0.log" Dec 02 20:46:25 crc kubenswrapper[4796]: I1202 20:46:25.191429 4796 patch_prober.go:28] interesting pod/machine-config-daemon-wzhpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 20:46:25 crc kubenswrapper[4796]: I1202 20:46:25.192650 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 20:46:25 crc kubenswrapper[4796]: I1202 20:46:25.192708 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" Dec 02 20:46:25 crc kubenswrapper[4796]: I1202 20:46:25.193330 4796 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c4b16f65e6cf8140dac6fd968d52d0328394c28b2ff197e938fef2e061c5b2e3"} pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 20:46:25 crc kubenswrapper[4796]: I1202 20:46:25.193380 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerName="machine-config-daemon" containerID="cri-o://c4b16f65e6cf8140dac6fd968d52d0328394c28b2ff197e938fef2e061c5b2e3" gracePeriod=600 Dec 02 20:46:25 crc kubenswrapper[4796]: I1202 20:46:25.375114 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p5k9r_7cb034af-9ad4-463d-8017-9df8e62d0a24/extract-utilities/0.log" Dec 02 20:46:25 crc kubenswrapper[4796]: I1202 20:46:25.389754 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tlm4b_6f6ec00d-4ac5-4442-a915-26d7109a0eac/registry-server/0.log" Dec 02 20:46:25 crc kubenswrapper[4796]: I1202 20:46:25.403207 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p5k9r_7cb034af-9ad4-463d-8017-9df8e62d0a24/extract-content/0.log" Dec 02 20:46:25 crc kubenswrapper[4796]: I1202 20:46:25.452347 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p5k9r_7cb034af-9ad4-463d-8017-9df8e62d0a24/extract-content/0.log" Dec 02 20:46:25 crc kubenswrapper[4796]: I1202 20:46:25.679498 4796 generic.go:334] "Generic (PLEG): container finished" podID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerID="c4b16f65e6cf8140dac6fd968d52d0328394c28b2ff197e938fef2e061c5b2e3" exitCode=0 Dec 02 20:46:25 crc kubenswrapper[4796]: I1202 20:46:25.679571 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" event={"ID":"5558dc7c-93f9-4212-bf22-fdec743e47ee","Type":"ContainerDied","Data":"c4b16f65e6cf8140dac6fd968d52d0328394c28b2ff197e938fef2e061c5b2e3"} Dec 02 20:46:25 crc kubenswrapper[4796]: I1202 20:46:25.679626 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" event={"ID":"5558dc7c-93f9-4212-bf22-fdec743e47ee","Type":"ContainerStarted","Data":"d3cce0b8f3056f766811a1454db69248cc4760a9f4f703245864873237ff69e6"} Dec 02 20:46:25 crc kubenswrapper[4796]: I1202 20:46:25.679671 4796 scope.go:117] "RemoveContainer" containerID="a3b1e4c1e71e5db50d52130283a4f488c4d2f8d9bfa463ec7357e0975e3daac1" Dec 02 20:46:25 crc kubenswrapper[4796]: I1202 20:46:25.695475 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p5k9r_7cb034af-9ad4-463d-8017-9df8e62d0a24/registry-server/0.log" Dec 02 20:46:25 crc kubenswrapper[4796]: I1202 20:46:25.852355 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-p75xc_f4b761ec-a0de-4a6b-bfcc-85d9de6fd4cd/extract-utilities/0.log" Dec 02 20:46:25 crc kubenswrapper[4796]: I1202 20:46:25.878128 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p5k9r_7cb034af-9ad4-463d-8017-9df8e62d0a24/extract-content/0.log" Dec 02 20:46:25 crc kubenswrapper[4796]: I1202 
20:46:25.878538 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p5k9r_7cb034af-9ad4-463d-8017-9df8e62d0a24/extract-utilities/0.log" Dec 02 20:46:26 crc kubenswrapper[4796]: I1202 20:46:26.379114 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-p75xc_f4b761ec-a0de-4a6b-bfcc-85d9de6fd4cd/extract-utilities/0.log" Dec 02 20:46:26 crc kubenswrapper[4796]: I1202 20:46:26.382125 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-p75xc_f4b761ec-a0de-4a6b-bfcc-85d9de6fd4cd/extract-content/0.log" Dec 02 20:46:26 crc kubenswrapper[4796]: I1202 20:46:26.382343 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-p75xc_f4b761ec-a0de-4a6b-bfcc-85d9de6fd4cd/extract-content/0.log" Dec 02 20:46:26 crc kubenswrapper[4796]: I1202 20:46:26.576713 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-p75xc_f4b761ec-a0de-4a6b-bfcc-85d9de6fd4cd/extract-utilities/0.log" Dec 02 20:46:26 crc kubenswrapper[4796]: I1202 20:46:26.603711 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-p75xc_f4b761ec-a0de-4a6b-bfcc-85d9de6fd4cd/extract-content/0.log" Dec 02 20:46:26 crc kubenswrapper[4796]: I1202 20:46:26.963832 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-p75xc_f4b761ec-a0de-4a6b-bfcc-85d9de6fd4cd/registry-server/0.log" Dec 02 20:46:29 crc kubenswrapper[4796]: I1202 20:46:29.658141 4796 scope.go:117] "RemoveContainer" containerID="e1ecd5375a0d4335d7df648084177b4caf6c9c39a105b64285947284d2114186" Dec 02 20:46:29 crc kubenswrapper[4796]: I1202 20:46:29.701884 4796 scope.go:117] "RemoveContainer" containerID="a08852c95b74cda80ada9b1f264df34eab70c672cb51e3d48cee59f9e08fba17" Dec 02 20:46:29 crc kubenswrapper[4796]: I1202 20:46:29.734949 4796 scope.go:117] "RemoveContainer" containerID="892e4a8ab47a0b3db0f74d800b4a8611cc56b7618b6f40d6a78f208bb375e871" Dec 02 20:46:29 crc kubenswrapper[4796]: I1202 20:46:29.794804 4796 scope.go:117] "RemoveContainer" containerID="f73d9e9a00b60d8a61dd714a5e0b8347b03577df9dba425911db7b5e0b3b0659" Dec 02 20:46:43 crc kubenswrapper[4796]: I1202 20:46:43.352820 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-wpmjh_b8d99ae4-a394-4cd6-89c6-9fe019b477df/prometheus-operator/0.log" Dec 02 20:46:43 crc kubenswrapper[4796]: I1202 20:46:43.589045 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7559cc484f-g6rdl_73b60abc-2642-4af2-b0e5-263c48ca6f05/prometheus-operator-admission-webhook/0.log" Dec 02 20:46:43 crc kubenswrapper[4796]: I1202 20:46:43.630288 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7559cc484f-jfg4w_0c2b967c-bf5c-41e0-8d3a-763881157417/prometheus-operator-admission-webhook/0.log" Dec 02 20:46:43 crc kubenswrapper[4796]: I1202 20:46:43.778196 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-dd8nq_e3d237b8-9c00-4ed9-b441-822ba51a7ed5/operator/0.log" Dec 02 20:46:43 crc kubenswrapper[4796]: I1202 20:46:43.862175 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_observability-ui-dashboards-7d5fb4cbfb-94mrc_66c828b2-afdf-4b46-9199-a7c0ceaf3942/observability-ui-dashboards/0.log" Dec 02 20:46:43 crc kubenswrapper[4796]: I1202 20:46:43.997065 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-ww4jg_87139126-df78-43da-984c-e8f633bc52a9/perses-operator/0.log" Dec 02 20:47:29 crc kubenswrapper[4796]: I1202 20:47:29.925866 4796 scope.go:117] "RemoveContainer" containerID="5e32fe3c85dd13bb7181071b88d8bc75cc19007a9d13eb6b3ad747eb21be075a" Dec 02 20:47:29 crc kubenswrapper[4796]: I1202 20:47:29.965042 4796 scope.go:117] "RemoveContainer" containerID="6814b5644c4d6d26205ece9524574ea56e291c46dbad48c7fd87e2efeb1ea969" Dec 02 20:47:30 crc kubenswrapper[4796]: I1202 20:47:30.021640 4796 scope.go:117] "RemoveContainer" containerID="68c160e52360bdf8d864992702819761d715af592fe7dc3fcc9537e4410bd2fc" Dec 02 20:47:30 crc kubenswrapper[4796]: I1202 20:47:30.056037 4796 scope.go:117] "RemoveContainer" containerID="5507c92e0ac9a759849bf26d9bf78c6635cb3662fa32177933fa5448873a326f" Dec 02 20:47:30 crc kubenswrapper[4796]: I1202 20:47:30.084538 4796 scope.go:117] "RemoveContainer" containerID="37e29902cdc4dac45535ca62747b39587bfef777aab9d293c1381ad5c7d876c1" Dec 02 20:47:30 crc kubenswrapper[4796]: I1202 20:47:30.121076 4796 scope.go:117] "RemoveContainer" containerID="d792899726d8331d184f4d39221b083e056ec1e34d1f4bacc163c1478c6dc799" Dec 02 20:47:30 crc kubenswrapper[4796]: I1202 20:47:30.156965 4796 scope.go:117] "RemoveContainer" containerID="a1883ed18e1868e34754ad44fa243126f21f33f232e552f8b1fa5f8e79a34e7c" Dec 02 20:47:30 crc kubenswrapper[4796]: I1202 20:47:30.173802 4796 scope.go:117] "RemoveContainer" containerID="a7ef55e420b57fc9c2d8e9534b7e6f99c9732ebe3f31478e5c34ca6618669192" Dec 02 20:47:45 crc kubenswrapper[4796]: I1202 20:47:45.420667 4796 generic.go:334] "Generic (PLEG): container finished" podID="c58d5df7-1f09-4316-863f-c1e7d907fd55" containerID="8e9a889f95d7ae9fc6a1f95cf1e9ad54157a09c5fbbc78e782f3e8f68daa6fba" exitCode=0 Dec 02 20:47:45 crc kubenswrapper[4796]: I1202 20:47:45.421403 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xz7jb/must-gather-pnp66" event={"ID":"c58d5df7-1f09-4316-863f-c1e7d907fd55","Type":"ContainerDied","Data":"8e9a889f95d7ae9fc6a1f95cf1e9ad54157a09c5fbbc78e782f3e8f68daa6fba"} Dec 02 20:47:45 crc kubenswrapper[4796]: I1202 20:47:45.422012 4796 scope.go:117] "RemoveContainer" containerID="8e9a889f95d7ae9fc6a1f95cf1e9ad54157a09c5fbbc78e782f3e8f68daa6fba" Dec 02 20:47:46 crc kubenswrapper[4796]: I1202 20:47:46.404434 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xz7jb_must-gather-pnp66_c58d5df7-1f09-4316-863f-c1e7d907fd55/gather/0.log" Dec 02 20:47:53 crc kubenswrapper[4796]: I1202 20:47:53.358448 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xz7jb/must-gather-pnp66"] Dec 02 20:47:53 crc kubenswrapper[4796]: I1202 20:47:53.359682 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-xz7jb/must-gather-pnp66" podUID="c58d5df7-1f09-4316-863f-c1e7d907fd55" containerName="copy" containerID="cri-o://63139767a2a77ae4d332a33939470096dab4be936da23652153cb644282262ff" gracePeriod=2 Dec 02 20:47:53 crc kubenswrapper[4796]: I1202 20:47:53.367056 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xz7jb/must-gather-pnp66"] Dec 02 20:47:53 crc kubenswrapper[4796]: I1202 
20:47:53.491805 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xz7jb_must-gather-pnp66_c58d5df7-1f09-4316-863f-c1e7d907fd55/copy/0.log" Dec 02 20:47:53 crc kubenswrapper[4796]: I1202 20:47:53.492236 4796 generic.go:334] "Generic (PLEG): container finished" podID="c58d5df7-1f09-4316-863f-c1e7d907fd55" containerID="63139767a2a77ae4d332a33939470096dab4be936da23652153cb644282262ff" exitCode=143 Dec 02 20:47:53 crc kubenswrapper[4796]: I1202 20:47:53.851784 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xz7jb_must-gather-pnp66_c58d5df7-1f09-4316-863f-c1e7d907fd55/copy/0.log" Dec 02 20:47:53 crc kubenswrapper[4796]: I1202 20:47:53.852650 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xz7jb/must-gather-pnp66" Dec 02 20:47:54 crc kubenswrapper[4796]: I1202 20:47:54.010674 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c58d5df7-1f09-4316-863f-c1e7d907fd55-must-gather-output\") pod \"c58d5df7-1f09-4316-863f-c1e7d907fd55\" (UID: \"c58d5df7-1f09-4316-863f-c1e7d907fd55\") " Dec 02 20:47:54 crc kubenswrapper[4796]: I1202 20:47:54.010851 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z48fq\" (UniqueName: \"kubernetes.io/projected/c58d5df7-1f09-4316-863f-c1e7d907fd55-kube-api-access-z48fq\") pod \"c58d5df7-1f09-4316-863f-c1e7d907fd55\" (UID: \"c58d5df7-1f09-4316-863f-c1e7d907fd55\") " Dec 02 20:47:54 crc kubenswrapper[4796]: I1202 20:47:54.019931 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c58d5df7-1f09-4316-863f-c1e7d907fd55-kube-api-access-z48fq" (OuterVolumeSpecName: "kube-api-access-z48fq") pod "c58d5df7-1f09-4316-863f-c1e7d907fd55" (UID: "c58d5df7-1f09-4316-863f-c1e7d907fd55"). InnerVolumeSpecName "kube-api-access-z48fq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:47:54 crc kubenswrapper[4796]: I1202 20:47:54.113046 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z48fq\" (UniqueName: \"kubernetes.io/projected/c58d5df7-1f09-4316-863f-c1e7d907fd55-kube-api-access-z48fq\") on node \"crc\" DevicePath \"\"" Dec 02 20:47:54 crc kubenswrapper[4796]: I1202 20:47:54.155048 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c58d5df7-1f09-4316-863f-c1e7d907fd55-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "c58d5df7-1f09-4316-863f-c1e7d907fd55" (UID: "c58d5df7-1f09-4316-863f-c1e7d907fd55"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:47:54 crc kubenswrapper[4796]: I1202 20:47:54.215480 4796 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c58d5df7-1f09-4316-863f-c1e7d907fd55-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 02 20:47:54 crc kubenswrapper[4796]: I1202 20:47:54.500983 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xz7jb_must-gather-pnp66_c58d5df7-1f09-4316-863f-c1e7d907fd55/copy/0.log" Dec 02 20:47:54 crc kubenswrapper[4796]: I1202 20:47:54.501402 4796 scope.go:117] "RemoveContainer" containerID="63139767a2a77ae4d332a33939470096dab4be936da23652153cb644282262ff" Dec 02 20:47:54 crc kubenswrapper[4796]: I1202 20:47:54.501473 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xz7jb/must-gather-pnp66" Dec 02 20:47:54 crc kubenswrapper[4796]: I1202 20:47:54.524499 4796 scope.go:117] "RemoveContainer" containerID="8e9a889f95d7ae9fc6a1f95cf1e9ad54157a09c5fbbc78e782f3e8f68daa6fba" Dec 02 20:47:55 crc kubenswrapper[4796]: I1202 20:47:55.278145 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c58d5df7-1f09-4316-863f-c1e7d907fd55" path="/var/lib/kubelet/pods/c58d5df7-1f09-4316-863f-c1e7d907fd55/volumes" Dec 02 20:48:25 crc kubenswrapper[4796]: I1202 20:48:25.189479 4796 patch_prober.go:28] interesting pod/machine-config-daemon-wzhpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 20:48:25 crc kubenswrapper[4796]: I1202 20:48:25.190463 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 20:48:30 crc kubenswrapper[4796]: I1202 20:48:30.327742 4796 scope.go:117] "RemoveContainer" containerID="f341b1f62ab5a2a27b153bc2905485e78832530b025c3dd18a5ba677d14c86ec" Dec 02 20:48:30 crc kubenswrapper[4796]: I1202 20:48:30.358581 4796 scope.go:117] "RemoveContainer" containerID="dbb81e850aeae73d7e6bc245ef50494ff00824dd6b3ec462280a24ced5626c50" Dec 02 20:48:55 crc kubenswrapper[4796]: I1202 20:48:55.189447 4796 patch_prober.go:28] interesting pod/machine-config-daemon-wzhpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 20:48:55 crc kubenswrapper[4796]: I1202 20:48:55.190317 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 20:49:25 crc kubenswrapper[4796]: I1202 20:49:25.189732 4796 patch_prober.go:28] interesting pod/machine-config-daemon-wzhpq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Dec 02 20:49:25 crc kubenswrapper[4796]: I1202 20:49:25.190491 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 20:49:25 crc kubenswrapper[4796]: I1202 20:49:25.190560 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" Dec 02 20:49:25 crc kubenswrapper[4796]: I1202 20:49:25.191577 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d3cce0b8f3056f766811a1454db69248cc4760a9f4f703245864873237ff69e6"} pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 20:49:25 crc kubenswrapper[4796]: I1202 20:49:25.191667 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerName="machine-config-daemon" containerID="cri-o://d3cce0b8f3056f766811a1454db69248cc4760a9f4f703245864873237ff69e6" gracePeriod=600 Dec 02 20:49:25 crc kubenswrapper[4796]: I1202 20:49:25.521824 4796 generic.go:334] "Generic (PLEG): container finished" podID="5558dc7c-93f9-4212-bf22-fdec743e47ee" containerID="d3cce0b8f3056f766811a1454db69248cc4760a9f4f703245864873237ff69e6" exitCode=0 Dec 02 20:49:25 crc kubenswrapper[4796]: I1202 20:49:25.521896 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" event={"ID":"5558dc7c-93f9-4212-bf22-fdec743e47ee","Type":"ContainerDied","Data":"d3cce0b8f3056f766811a1454db69248cc4760a9f4f703245864873237ff69e6"} Dec 02 20:49:25 crc kubenswrapper[4796]: I1202 20:49:25.522349 4796 scope.go:117] "RemoveContainer" containerID="c4b16f65e6cf8140dac6fd968d52d0328394c28b2ff197e938fef2e061c5b2e3" Dec 02 20:49:25 crc kubenswrapper[4796]: E1202 20:49:25.826874 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wzhpq_openshift-machine-config-operator(5558dc7c-93f9-4212-bf22-fdec743e47ee)\"" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" Dec 02 20:49:26 crc kubenswrapper[4796]: I1202 20:49:26.533155 4796 scope.go:117] "RemoveContainer" containerID="d3cce0b8f3056f766811a1454db69248cc4760a9f4f703245864873237ff69e6" Dec 02 20:49:26 crc kubenswrapper[4796]: E1202 20:49:26.535423 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wzhpq_openshift-machine-config-operator(5558dc7c-93f9-4212-bf22-fdec743e47ee)\"" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" Dec 02 20:49:35 crc kubenswrapper[4796]: I1202 20:49:35.241057 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cn2bd"] Dec 02 
Dec 02 20:49:35 crc kubenswrapper[4796]: E1202 20:49:35.242675 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4" containerName="extract-utilities"
Dec 02 20:49:35 crc kubenswrapper[4796]: I1202 20:49:35.242696 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4" containerName="extract-utilities"
Dec 02 20:49:35 crc kubenswrapper[4796]: E1202 20:49:35.242712 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4" containerName="registry-server"
Dec 02 20:49:35 crc kubenswrapper[4796]: I1202 20:49:35.242742 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4" containerName="registry-server"
Dec 02 20:49:35 crc kubenswrapper[4796]: E1202 20:49:35.242759 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c58d5df7-1f09-4316-863f-c1e7d907fd55" containerName="gather"
Dec 02 20:49:35 crc kubenswrapper[4796]: I1202 20:49:35.242766 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c58d5df7-1f09-4316-863f-c1e7d907fd55" containerName="gather"
Dec 02 20:49:35 crc kubenswrapper[4796]: E1202 20:49:35.242793 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4" containerName="extract-content"
Dec 02 20:49:35 crc kubenswrapper[4796]: I1202 20:49:35.242801 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4" containerName="extract-content"
Dec 02 20:49:35 crc kubenswrapper[4796]: E1202 20:49:35.242819 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c58d5df7-1f09-4316-863f-c1e7d907fd55" containerName="copy"
Dec 02 20:49:35 crc kubenswrapper[4796]: I1202 20:49:35.242826 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c58d5df7-1f09-4316-863f-c1e7d907fd55" containerName="copy"
Dec 02 20:49:35 crc kubenswrapper[4796]: I1202 20:49:35.243057 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="c58d5df7-1f09-4316-863f-c1e7d907fd55" containerName="gather"
Dec 02 20:49:35 crc kubenswrapper[4796]: I1202 20:49:35.243097 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="c58d5df7-1f09-4316-863f-c1e7d907fd55" containerName="copy"
Dec 02 20:49:35 crc kubenswrapper[4796]: I1202 20:49:35.243110 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1ca18a2-16ec-41f9-9d1f-7107e7bbaca4" containerName="registry-server"
Dec 02 20:49:35 crc kubenswrapper[4796]: I1202 20:49:35.245026 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cn2bd"
Dec 02 20:49:35 crc kubenswrapper[4796]: I1202 20:49:35.260477 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cn2bd"]
Dec 02 20:49:35 crc kubenswrapper[4796]: I1202 20:49:35.380054 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3518b59-ff54-4239-a725-ab324bc92f78-utilities\") pod \"community-operators-cn2bd\" (UID: \"f3518b59-ff54-4239-a725-ab324bc92f78\") " pod="openshift-marketplace/community-operators-cn2bd"
Dec 02 20:49:35 crc kubenswrapper[4796]: I1202 20:49:35.380134 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3518b59-ff54-4239-a725-ab324bc92f78-catalog-content\") pod \"community-operators-cn2bd\" (UID: \"f3518b59-ff54-4239-a725-ab324bc92f78\") " pod="openshift-marketplace/community-operators-cn2bd"
Dec 02 20:49:35 crc kubenswrapper[4796]: I1202 20:49:35.380183 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b45s2\" (UniqueName: \"kubernetes.io/projected/f3518b59-ff54-4239-a725-ab324bc92f78-kube-api-access-b45s2\") pod \"community-operators-cn2bd\" (UID: \"f3518b59-ff54-4239-a725-ab324bc92f78\") " pod="openshift-marketplace/community-operators-cn2bd"
Dec 02 20:49:35 crc kubenswrapper[4796]: I1202 20:49:35.481947 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3518b59-ff54-4239-a725-ab324bc92f78-utilities\") pod \"community-operators-cn2bd\" (UID: \"f3518b59-ff54-4239-a725-ab324bc92f78\") " pod="openshift-marketplace/community-operators-cn2bd"
Dec 02 20:49:35 crc kubenswrapper[4796]: I1202 20:49:35.482066 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3518b59-ff54-4239-a725-ab324bc92f78-catalog-content\") pod \"community-operators-cn2bd\" (UID: \"f3518b59-ff54-4239-a725-ab324bc92f78\") " pod="openshift-marketplace/community-operators-cn2bd"
Dec 02 20:49:35 crc kubenswrapper[4796]: I1202 20:49:35.482131 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b45s2\" (UniqueName: \"kubernetes.io/projected/f3518b59-ff54-4239-a725-ab324bc92f78-kube-api-access-b45s2\") pod \"community-operators-cn2bd\" (UID: \"f3518b59-ff54-4239-a725-ab324bc92f78\") " pod="openshift-marketplace/community-operators-cn2bd"
Dec 02 20:49:35 crc kubenswrapper[4796]: I1202 20:49:35.482550 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3518b59-ff54-4239-a725-ab324bc92f78-utilities\") pod \"community-operators-cn2bd\" (UID: \"f3518b59-ff54-4239-a725-ab324bc92f78\") " pod="openshift-marketplace/community-operators-cn2bd"
Dec 02 20:49:35 crc kubenswrapper[4796]: I1202 20:49:35.482619 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3518b59-ff54-4239-a725-ab324bc92f78-catalog-content\") pod \"community-operators-cn2bd\" (UID: \"f3518b59-ff54-4239-a725-ab324bc92f78\") " pod="openshift-marketplace/community-operators-cn2bd"
Dec 02 20:49:35 crc kubenswrapper[4796]: I1202 20:49:35.500989 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b45s2\" (UniqueName: \"kubernetes.io/projected/f3518b59-ff54-4239-a725-ab324bc92f78-kube-api-access-b45s2\") pod \"community-operators-cn2bd\" (UID: \"f3518b59-ff54-4239-a725-ab324bc92f78\") " pod="openshift-marketplace/community-operators-cn2bd"
Dec 02 20:49:35 crc kubenswrapper[4796]: I1202 20:49:35.577537 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cn2bd"
Dec 02 20:49:36 crc kubenswrapper[4796]: I1202 20:49:36.086307 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cn2bd"]
Dec 02 20:49:36 crc kubenswrapper[4796]: I1202 20:49:36.634726 4796 generic.go:334] "Generic (PLEG): container finished" podID="f3518b59-ff54-4239-a725-ab324bc92f78" containerID="19d4ca5935d9f2b6f94568e5a80966d2808fab9690a0f5bba895916eac912e42" exitCode=0
Dec 02 20:49:36 crc kubenswrapper[4796]: I1202 20:49:36.634918 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn2bd" event={"ID":"f3518b59-ff54-4239-a725-ab324bc92f78","Type":"ContainerDied","Data":"19d4ca5935d9f2b6f94568e5a80966d2808fab9690a0f5bba895916eac912e42"}
Dec 02 20:49:36 crc kubenswrapper[4796]: I1202 20:49:36.635111 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn2bd" event={"ID":"f3518b59-ff54-4239-a725-ab324bc92f78","Type":"ContainerStarted","Data":"700486c756b19e5af0ccc849bb24cf173a135e07393a0c64e732ac2284636aa3"}
Dec 02 20:49:36 crc kubenswrapper[4796]: I1202 20:49:36.636960 4796 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 02 20:49:37 crc kubenswrapper[4796]: I1202 20:49:37.645607 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn2bd" event={"ID":"f3518b59-ff54-4239-a725-ab324bc92f78","Type":"ContainerStarted","Data":"bdd01038db23dd45289260cd2084fc11617a18901515249bc702e67515a1127c"}
Dec 02 20:49:38 crc kubenswrapper[4796]: I1202 20:49:38.659390 4796 generic.go:334] "Generic (PLEG): container finished" podID="f3518b59-ff54-4239-a725-ab324bc92f78" containerID="bdd01038db23dd45289260cd2084fc11617a18901515249bc702e67515a1127c" exitCode=0
Dec 02 20:49:38 crc kubenswrapper[4796]: I1202 20:49:38.659562 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn2bd" event={"ID":"f3518b59-ff54-4239-a725-ab324bc92f78","Type":"ContainerDied","Data":"bdd01038db23dd45289260cd2084fc11617a18901515249bc702e67515a1127c"}
Dec 02 20:49:39 crc kubenswrapper[4796]: I1202 20:49:39.674480 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn2bd" event={"ID":"f3518b59-ff54-4239-a725-ab324bc92f78","Type":"ContainerStarted","Data":"cc38d85afee1abb1f826543eb510fad48fb24c509b8bae70c0bd860cfc8840a9"}
Dec 02 20:49:39 crc kubenswrapper[4796]: I1202 20:49:39.704935 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cn2bd" podStartSLOduration=2.060997453 podStartE2EDuration="4.704916726s" podCreationTimestamp="2025-12-02 20:49:35 +0000 UTC" firstStartedPulling="2025-12-02 20:49:36.636652957 +0000 UTC m=+2259.640028511" lastFinishedPulling="2025-12-02 20:49:39.28057224 +0000 UTC m=+2262.283947784" observedRunningTime="2025-12-02 20:49:39.700723825 +0000 UTC m=+2262.704099379" watchObservedRunningTime="2025-12-02 20:49:39.704916726 +0000 UTC m=+2262.708292260"
Dec 02 20:49:40 crc kubenswrapper[4796]: I1202 20:49:40.265284 4796 scope.go:117] "RemoveContainer" containerID="d3cce0b8f3056f766811a1454db69248cc4760a9f4f703245864873237ff69e6"
Dec 02 20:49:40 crc kubenswrapper[4796]: E1202 20:49:40.266028 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wzhpq_openshift-machine-config-operator(5558dc7c-93f9-4212-bf22-fdec743e47ee)\"" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee"
Dec 02 20:49:45 crc kubenswrapper[4796]: I1202 20:49:45.578083 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cn2bd"
Dec 02 20:49:45 crc kubenswrapper[4796]: I1202 20:49:45.578786 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cn2bd"
Dec 02 20:49:45 crc kubenswrapper[4796]: I1202 20:49:45.636880 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cn2bd"
Dec 02 20:49:45 crc kubenswrapper[4796]: I1202 20:49:45.794286 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cn2bd"
Dec 02 20:49:49 crc kubenswrapper[4796]: I1202 20:49:49.212178 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cn2bd"]
Dec 02 20:49:49 crc kubenswrapper[4796]: I1202 20:49:49.212696 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cn2bd" podUID="f3518b59-ff54-4239-a725-ab324bc92f78" containerName="registry-server" containerID="cri-o://cc38d85afee1abb1f826543eb510fad48fb24c509b8bae70c0bd860cfc8840a9" gracePeriod=2
Dec 02 20:49:49 crc kubenswrapper[4796]: I1202 20:49:49.768182 4796 generic.go:334] "Generic (PLEG): container finished" podID="f3518b59-ff54-4239-a725-ab324bc92f78" containerID="cc38d85afee1abb1f826543eb510fad48fb24c509b8bae70c0bd860cfc8840a9" exitCode=0
Dec 02 20:49:49 crc kubenswrapper[4796]: I1202 20:49:49.768276 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn2bd" event={"ID":"f3518b59-ff54-4239-a725-ab324bc92f78","Type":"ContainerDied","Data":"cc38d85afee1abb1f826543eb510fad48fb24c509b8bae70c0bd860cfc8840a9"}
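The pod_startup_latency_tracker entry above for community-operators-cn2bd is internally consistent: podStartE2EDuration (watchObservedRunningTime minus podCreationTimestamp) is about 4.70s, the image-pull window (firstStartedPulling to lastFinishedPulling) is about 2.64s, and podStartSLOduration, which excludes pull time, is their difference, about 2.06s. The small Go check below redoes that arithmetic with the timestamps copied from the log (the monotonic-clock "m=+…" suffixes are dropped before parsing).

```go
package main

import (
	"fmt"
	"time"
)

// Recompute the "Observed pod startup duration" figures for
// community-operators-cn2bd from timestamps copied out of the log entry above.
// podStartSLOduration is the end-to-end startup time minus the image-pull
// window, so the three reported numbers should (approximately) agree.
func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2025-12-02 20:49:35 +0000 UTC")             // podCreationTimestamp
	firstPull := parse("2025-12-02 20:49:36.636652957 +0000 UTC") // firstStartedPulling
	lastPull := parse("2025-12-02 20:49:39.28057224 +0000 UTC")   // lastFinishedPulling
	running := parse("2025-12-02 20:49:39.704916726 +0000 UTC")   // watchObservedRunningTime

	e2e := running.Sub(created)     // ≈ 4.704916726s, the logged podStartE2EDuration
	pull := lastPull.Sub(firstPull) // ≈ 2.643919283s spent pulling images
	fmt.Println("E2E:", e2e, "pull:", pull, "SLO ≈ E2E - pull:", e2e-pull) // ≈ 2.061s ≈ podStartSLOduration
}
```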
Need to start a new one" pod="openshift-marketplace/community-operators-cn2bd" Dec 02 20:49:50 crc kubenswrapper[4796]: I1202 20:49:50.380217 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3518b59-ff54-4239-a725-ab324bc92f78-catalog-content\") pod \"f3518b59-ff54-4239-a725-ab324bc92f78\" (UID: \"f3518b59-ff54-4239-a725-ab324bc92f78\") " Dec 02 20:49:50 crc kubenswrapper[4796]: I1202 20:49:50.380535 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b45s2\" (UniqueName: \"kubernetes.io/projected/f3518b59-ff54-4239-a725-ab324bc92f78-kube-api-access-b45s2\") pod \"f3518b59-ff54-4239-a725-ab324bc92f78\" (UID: \"f3518b59-ff54-4239-a725-ab324bc92f78\") " Dec 02 20:49:50 crc kubenswrapper[4796]: I1202 20:49:50.380725 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3518b59-ff54-4239-a725-ab324bc92f78-utilities\") pod \"f3518b59-ff54-4239-a725-ab324bc92f78\" (UID: \"f3518b59-ff54-4239-a725-ab324bc92f78\") " Dec 02 20:49:50 crc kubenswrapper[4796]: I1202 20:49:50.381651 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3518b59-ff54-4239-a725-ab324bc92f78-utilities" (OuterVolumeSpecName: "utilities") pod "f3518b59-ff54-4239-a725-ab324bc92f78" (UID: "f3518b59-ff54-4239-a725-ab324bc92f78"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:49:50 crc kubenswrapper[4796]: I1202 20:49:50.393499 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3518b59-ff54-4239-a725-ab324bc92f78-kube-api-access-b45s2" (OuterVolumeSpecName: "kube-api-access-b45s2") pod "f3518b59-ff54-4239-a725-ab324bc92f78" (UID: "f3518b59-ff54-4239-a725-ab324bc92f78"). InnerVolumeSpecName "kube-api-access-b45s2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:49:50 crc kubenswrapper[4796]: I1202 20:49:50.429752 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3518b59-ff54-4239-a725-ab324bc92f78-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f3518b59-ff54-4239-a725-ab324bc92f78" (UID: "f3518b59-ff54-4239-a725-ab324bc92f78"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:49:50 crc kubenswrapper[4796]: I1202 20:49:50.483118 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3518b59-ff54-4239-a725-ab324bc92f78-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 20:49:50 crc kubenswrapper[4796]: I1202 20:49:50.483153 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3518b59-ff54-4239-a725-ab324bc92f78-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 20:49:50 crc kubenswrapper[4796]: I1202 20:49:50.483168 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b45s2\" (UniqueName: \"kubernetes.io/projected/f3518b59-ff54-4239-a725-ab324bc92f78-kube-api-access-b45s2\") on node \"crc\" DevicePath \"\"" Dec 02 20:49:50 crc kubenswrapper[4796]: I1202 20:49:50.780505 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn2bd" event={"ID":"f3518b59-ff54-4239-a725-ab324bc92f78","Type":"ContainerDied","Data":"700486c756b19e5af0ccc849bb24cf173a135e07393a0c64e732ac2284636aa3"} Dec 02 20:49:50 crc kubenswrapper[4796]: I1202 20:49:50.780573 4796 scope.go:117] "RemoveContainer" containerID="cc38d85afee1abb1f826543eb510fad48fb24c509b8bae70c0bd860cfc8840a9" Dec 02 20:49:50 crc kubenswrapper[4796]: I1202 20:49:50.780740 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cn2bd" Dec 02 20:49:50 crc kubenswrapper[4796]: I1202 20:49:50.824023 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cn2bd"] Dec 02 20:49:50 crc kubenswrapper[4796]: I1202 20:49:50.827291 4796 scope.go:117] "RemoveContainer" containerID="bdd01038db23dd45289260cd2084fc11617a18901515249bc702e67515a1127c" Dec 02 20:49:50 crc kubenswrapper[4796]: I1202 20:49:50.830535 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cn2bd"] Dec 02 20:49:50 crc kubenswrapper[4796]: I1202 20:49:50.888636 4796 scope.go:117] "RemoveContainer" containerID="19d4ca5935d9f2b6f94568e5a80966d2808fab9690a0f5bba895916eac912e42" Dec 02 20:49:51 crc kubenswrapper[4796]: I1202 20:49:51.283844 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3518b59-ff54-4239-a725-ab324bc92f78" path="/var/lib/kubelet/pods/f3518b59-ff54-4239-a725-ab324bc92f78/volumes" Dec 02 20:49:52 crc kubenswrapper[4796]: I1202 20:49:52.264945 4796 scope.go:117] "RemoveContainer" containerID="d3cce0b8f3056f766811a1454db69248cc4760a9f4f703245864873237ff69e6" Dec 02 20:49:52 crc kubenswrapper[4796]: E1202 20:49:52.265501 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wzhpq_openshift-machine-config-operator(5558dc7c-93f9-4212-bf22-fdec743e47ee)\"" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" Dec 02 20:50:04 crc kubenswrapper[4796]: I1202 20:50:04.265535 4796 scope.go:117] "RemoveContainer" containerID="d3cce0b8f3056f766811a1454db69248cc4760a9f4f703245864873237ff69e6" Dec 02 20:50:04 crc kubenswrapper[4796]: E1202 20:50:04.266418 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wzhpq_openshift-machine-config-operator(5558dc7c-93f9-4212-bf22-fdec743e47ee)\"" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" Dec 02 20:50:19 crc kubenswrapper[4796]: I1202 20:50:19.265048 4796 scope.go:117] "RemoveContainer" containerID="d3cce0b8f3056f766811a1454db69248cc4760a9f4f703245864873237ff69e6" Dec 02 20:50:19 crc kubenswrapper[4796]: E1202 20:50:19.266229 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wzhpq_openshift-machine-config-operator(5558dc7c-93f9-4212-bf22-fdec743e47ee)\"" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" Dec 02 20:50:32 crc kubenswrapper[4796]: I1202 20:50:32.264846 4796 scope.go:117] "RemoveContainer" containerID="d3cce0b8f3056f766811a1454db69248cc4760a9f4f703245864873237ff69e6" Dec 02 20:50:32 crc kubenswrapper[4796]: E1202 20:50:32.265767 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wzhpq_openshift-machine-config-operator(5558dc7c-93f9-4212-bf22-fdec743e47ee)\"" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" Dec 02 20:50:43 crc kubenswrapper[4796]: I1202 20:50:43.264798 4796 scope.go:117] "RemoveContainer" containerID="d3cce0b8f3056f766811a1454db69248cc4760a9f4f703245864873237ff69e6" Dec 02 20:50:43 crc kubenswrapper[4796]: E1202 20:50:43.265394 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wzhpq_openshift-machine-config-operator(5558dc7c-93f9-4212-bf22-fdec743e47ee)\"" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" Dec 02 20:50:49 crc kubenswrapper[4796]: I1202 20:50:49.840174 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jrzxs"] Dec 02 20:50:49 crc kubenswrapper[4796]: E1202 20:50:49.841427 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3518b59-ff54-4239-a725-ab324bc92f78" containerName="extract-content" Dec 02 20:50:49 crc kubenswrapper[4796]: I1202 20:50:49.841450 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3518b59-ff54-4239-a725-ab324bc92f78" containerName="extract-content" Dec 02 20:50:49 crc kubenswrapper[4796]: E1202 20:50:49.841495 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3518b59-ff54-4239-a725-ab324bc92f78" containerName="extract-utilities" Dec 02 20:50:49 crc kubenswrapper[4796]: I1202 20:50:49.841506 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3518b59-ff54-4239-a725-ab324bc92f78" containerName="extract-utilities" Dec 02 20:50:49 crc kubenswrapper[4796]: E1202 20:50:49.841529 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3518b59-ff54-4239-a725-ab324bc92f78" containerName="registry-server" Dec 02 20:50:49 crc kubenswrapper[4796]: I1202 
Dec 02 20:50:49 crc kubenswrapper[4796]: I1202 20:50:49.841538 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3518b59-ff54-4239-a725-ab324bc92f78" containerName="registry-server"
Dec 02 20:50:49 crc kubenswrapper[4796]: I1202 20:50:49.841757 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3518b59-ff54-4239-a725-ab324bc92f78" containerName="registry-server"
Dec 02 20:50:49 crc kubenswrapper[4796]: I1202 20:50:49.843180 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jrzxs"
Dec 02 20:50:49 crc kubenswrapper[4796]: I1202 20:50:49.854979 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jrzxs"]
Dec 02 20:50:49 crc kubenswrapper[4796]: I1202 20:50:49.947139 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7spw\" (UniqueName: \"kubernetes.io/projected/81febbc4-d060-481c-aef0-51eab2c55935-kube-api-access-m7spw\") pod \"redhat-marketplace-jrzxs\" (UID: \"81febbc4-d060-481c-aef0-51eab2c55935\") " pod="openshift-marketplace/redhat-marketplace-jrzxs"
Dec 02 20:50:49 crc kubenswrapper[4796]: I1202 20:50:49.947197 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81febbc4-d060-481c-aef0-51eab2c55935-catalog-content\") pod \"redhat-marketplace-jrzxs\" (UID: \"81febbc4-d060-481c-aef0-51eab2c55935\") " pod="openshift-marketplace/redhat-marketplace-jrzxs"
Dec 02 20:50:49 crc kubenswrapper[4796]: I1202 20:50:49.947361 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81febbc4-d060-481c-aef0-51eab2c55935-utilities\") pod \"redhat-marketplace-jrzxs\" (UID: \"81febbc4-d060-481c-aef0-51eab2c55935\") " pod="openshift-marketplace/redhat-marketplace-jrzxs"
Dec 02 20:50:50 crc kubenswrapper[4796]: I1202 20:50:50.049100 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81febbc4-d060-481c-aef0-51eab2c55935-utilities\") pod \"redhat-marketplace-jrzxs\" (UID: \"81febbc4-d060-481c-aef0-51eab2c55935\") " pod="openshift-marketplace/redhat-marketplace-jrzxs"
Dec 02 20:50:50 crc kubenswrapper[4796]: I1202 20:50:50.049223 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7spw\" (UniqueName: \"kubernetes.io/projected/81febbc4-d060-481c-aef0-51eab2c55935-kube-api-access-m7spw\") pod \"redhat-marketplace-jrzxs\" (UID: \"81febbc4-d060-481c-aef0-51eab2c55935\") " pod="openshift-marketplace/redhat-marketplace-jrzxs"
Dec 02 20:50:50 crc kubenswrapper[4796]: I1202 20:50:50.049270 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81febbc4-d060-481c-aef0-51eab2c55935-catalog-content\") pod \"redhat-marketplace-jrzxs\" (UID: \"81febbc4-d060-481c-aef0-51eab2c55935\") " pod="openshift-marketplace/redhat-marketplace-jrzxs"
Dec 02 20:50:50 crc kubenswrapper[4796]: I1202 20:50:50.049805 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81febbc4-d060-481c-aef0-51eab2c55935-catalog-content\") pod \"redhat-marketplace-jrzxs\" (UID: \"81febbc4-d060-481c-aef0-51eab2c55935\") " pod="openshift-marketplace/redhat-marketplace-jrzxs"
Dec 02 20:50:50 crc kubenswrapper[4796]: I1202 20:50:50.050094 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81febbc4-d060-481c-aef0-51eab2c55935-utilities\") pod \"redhat-marketplace-jrzxs\" (UID: \"81febbc4-d060-481c-aef0-51eab2c55935\") " pod="openshift-marketplace/redhat-marketplace-jrzxs"
Dec 02 20:50:50 crc kubenswrapper[4796]: I1202 20:50:50.071329 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7spw\" (UniqueName: \"kubernetes.io/projected/81febbc4-d060-481c-aef0-51eab2c55935-kube-api-access-m7spw\") pod \"redhat-marketplace-jrzxs\" (UID: \"81febbc4-d060-481c-aef0-51eab2c55935\") " pod="openshift-marketplace/redhat-marketplace-jrzxs"
Dec 02 20:50:50 crc kubenswrapper[4796]: I1202 20:50:50.172294 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jrzxs"
Dec 02 20:50:50 crc kubenswrapper[4796]: I1202 20:50:50.758014 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jrzxs"]
Dec 02 20:50:51 crc kubenswrapper[4796]: I1202 20:50:51.384104 4796 generic.go:334] "Generic (PLEG): container finished" podID="81febbc4-d060-481c-aef0-51eab2c55935" containerID="d6fc998facdc3e2682b10eb83c7b12c5b03e001be26e6795dabf926ec7d88a54" exitCode=0
Dec 02 20:50:51 crc kubenswrapper[4796]: I1202 20:50:51.384173 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrzxs" event={"ID":"81febbc4-d060-481c-aef0-51eab2c55935","Type":"ContainerDied","Data":"d6fc998facdc3e2682b10eb83c7b12c5b03e001be26e6795dabf926ec7d88a54"}
Dec 02 20:50:51 crc kubenswrapper[4796]: I1202 20:50:51.384215 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrzxs" event={"ID":"81febbc4-d060-481c-aef0-51eab2c55935","Type":"ContainerStarted","Data":"661b5cf90c97b5fe1b690f0be93f4c622a88060018127a6b9648b6e4a1df8d7d"}
Dec 02 20:50:52 crc kubenswrapper[4796]: I1202 20:50:52.402582 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrzxs" event={"ID":"81febbc4-d060-481c-aef0-51eab2c55935","Type":"ContainerStarted","Data":"a85922d3e5d149b041ee938054a8dbf2ebd5117e5e247dddb46f2d4960ff4bac"}
Dec 02 20:50:53 crc kubenswrapper[4796]: I1202 20:50:53.418575 4796 generic.go:334] "Generic (PLEG): container finished" podID="81febbc4-d060-481c-aef0-51eab2c55935" containerID="a85922d3e5d149b041ee938054a8dbf2ebd5117e5e247dddb46f2d4960ff4bac" exitCode=0
Dec 02 20:50:53 crc kubenswrapper[4796]: I1202 20:50:53.419017 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrzxs" event={"ID":"81febbc4-d060-481c-aef0-51eab2c55935","Type":"ContainerDied","Data":"a85922d3e5d149b041ee938054a8dbf2ebd5117e5e247dddb46f2d4960ff4bac"}
Dec 02 20:50:53 crc kubenswrapper[4796]: I1202 20:50:53.419064 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrzxs" event={"ID":"81febbc4-d060-481c-aef0-51eab2c55935","Type":"ContainerStarted","Data":"7117b923fcae14eb68354272f1f3375416c8e05a4dfbf5656d59b1a52cf4cd2c"}
Dec 02 20:50:53 crc kubenswrapper[4796]: I1202 20:50:53.442334 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jrzxs" podStartSLOduration=2.9965465079999998 podStartE2EDuration="4.442313447s" podCreationTimestamp="2025-12-02 20:50:49 +0000 UTC" firstStartedPulling="2025-12-02 20:50:51.389843713 +0000 UTC m=+2334.393219287" lastFinishedPulling="2025-12-02 20:50:52.835610662 +0000 UTC m=+2335.838986226" observedRunningTime="2025-12-02 20:50:53.439707743 +0000 UTC m=+2336.443083277" watchObservedRunningTime="2025-12-02 20:50:53.442313447 +0000 UTC m=+2336.445689001"
Dec 02 20:50:54 crc kubenswrapper[4796]: I1202 20:50:54.265159 4796 scope.go:117] "RemoveContainer" containerID="d3cce0b8f3056f766811a1454db69248cc4760a9f4f703245864873237ff69e6"
Dec 02 20:50:54 crc kubenswrapper[4796]: E1202 20:50:54.266180 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wzhpq_openshift-machine-config-operator(5558dc7c-93f9-4212-bf22-fdec743e47ee)\"" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee"
Dec 02 20:51:00 crc kubenswrapper[4796]: I1202 20:51:00.173359 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jrzxs"
Dec 02 20:51:00 crc kubenswrapper[4796]: I1202 20:51:00.174136 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jrzxs"
Dec 02 20:51:00 crc kubenswrapper[4796]: I1202 20:51:00.279775 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jrzxs"
Dec 02 20:51:00 crc kubenswrapper[4796]: I1202 20:51:00.616347 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jrzxs"
Dec 02 20:51:03 crc kubenswrapper[4796]: I1202 20:51:03.828219 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jrzxs"]
Dec 02 20:51:03 crc kubenswrapper[4796]: I1202 20:51:03.828993 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jrzxs" podUID="81febbc4-d060-481c-aef0-51eab2c55935" containerName="registry-server" containerID="cri-o://7117b923fcae14eb68354272f1f3375416c8e05a4dfbf5656d59b1a52cf4cd2c" gracePeriod=2
Dec 02 20:51:04 crc kubenswrapper[4796]: I1202 20:51:04.558724 4796 generic.go:334] "Generic (PLEG): container finished" podID="81febbc4-d060-481c-aef0-51eab2c55935" containerID="7117b923fcae14eb68354272f1f3375416c8e05a4dfbf5656d59b1a52cf4cd2c" exitCode=0
Dec 02 20:51:04 crc kubenswrapper[4796]: I1202 20:51:04.558916 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrzxs" event={"ID":"81febbc4-d060-481c-aef0-51eab2c55935","Type":"ContainerDied","Data":"7117b923fcae14eb68354272f1f3375416c8e05a4dfbf5656d59b1a52cf4cd2c"}
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jrzxs" Dec 02 20:51:04 crc kubenswrapper[4796]: I1202 20:51:04.966177 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7spw\" (UniqueName: \"kubernetes.io/projected/81febbc4-d060-481c-aef0-51eab2c55935-kube-api-access-m7spw\") pod \"81febbc4-d060-481c-aef0-51eab2c55935\" (UID: \"81febbc4-d060-481c-aef0-51eab2c55935\") " Dec 02 20:51:04 crc kubenswrapper[4796]: I1202 20:51:04.966315 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81febbc4-d060-481c-aef0-51eab2c55935-utilities\") pod \"81febbc4-d060-481c-aef0-51eab2c55935\" (UID: \"81febbc4-d060-481c-aef0-51eab2c55935\") " Dec 02 20:51:04 crc kubenswrapper[4796]: I1202 20:51:04.966357 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81febbc4-d060-481c-aef0-51eab2c55935-catalog-content\") pod \"81febbc4-d060-481c-aef0-51eab2c55935\" (UID: \"81febbc4-d060-481c-aef0-51eab2c55935\") " Dec 02 20:51:04 crc kubenswrapper[4796]: I1202 20:51:04.967479 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81febbc4-d060-481c-aef0-51eab2c55935-utilities" (OuterVolumeSpecName: "utilities") pod "81febbc4-d060-481c-aef0-51eab2c55935" (UID: "81febbc4-d060-481c-aef0-51eab2c55935"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:51:04 crc kubenswrapper[4796]: I1202 20:51:04.979489 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81febbc4-d060-481c-aef0-51eab2c55935-kube-api-access-m7spw" (OuterVolumeSpecName: "kube-api-access-m7spw") pod "81febbc4-d060-481c-aef0-51eab2c55935" (UID: "81febbc4-d060-481c-aef0-51eab2c55935"). InnerVolumeSpecName "kube-api-access-m7spw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 20:51:05 crc kubenswrapper[4796]: I1202 20:51:05.002548 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81febbc4-d060-481c-aef0-51eab2c55935-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "81febbc4-d060-481c-aef0-51eab2c55935" (UID: "81febbc4-d060-481c-aef0-51eab2c55935"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 20:51:05 crc kubenswrapper[4796]: I1202 20:51:05.068527 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7spw\" (UniqueName: \"kubernetes.io/projected/81febbc4-d060-481c-aef0-51eab2c55935-kube-api-access-m7spw\") on node \"crc\" DevicePath \"\"" Dec 02 20:51:05 crc kubenswrapper[4796]: I1202 20:51:05.068562 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81febbc4-d060-481c-aef0-51eab2c55935-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 20:51:05 crc kubenswrapper[4796]: I1202 20:51:05.068574 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81febbc4-d060-481c-aef0-51eab2c55935-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 20:51:05 crc kubenswrapper[4796]: I1202 20:51:05.571861 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrzxs" event={"ID":"81febbc4-d060-481c-aef0-51eab2c55935","Type":"ContainerDied","Data":"661b5cf90c97b5fe1b690f0be93f4c622a88060018127a6b9648b6e4a1df8d7d"} Dec 02 20:51:05 crc kubenswrapper[4796]: I1202 20:51:05.571968 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jrzxs" Dec 02 20:51:05 crc kubenswrapper[4796]: I1202 20:51:05.572287 4796 scope.go:117] "RemoveContainer" containerID="7117b923fcae14eb68354272f1f3375416c8e05a4dfbf5656d59b1a52cf4cd2c" Dec 02 20:51:05 crc kubenswrapper[4796]: I1202 20:51:05.599564 4796 scope.go:117] "RemoveContainer" containerID="a85922d3e5d149b041ee938054a8dbf2ebd5117e5e247dddb46f2d4960ff4bac" Dec 02 20:51:05 crc kubenswrapper[4796]: I1202 20:51:05.603563 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jrzxs"] Dec 02 20:51:05 crc kubenswrapper[4796]: I1202 20:51:05.619998 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jrzxs"] Dec 02 20:51:05 crc kubenswrapper[4796]: I1202 20:51:05.625601 4796 scope.go:117] "RemoveContainer" containerID="d6fc998facdc3e2682b10eb83c7b12c5b03e001be26e6795dabf926ec7d88a54" Dec 02 20:51:07 crc kubenswrapper[4796]: I1202 20:51:07.279436 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81febbc4-d060-481c-aef0-51eab2c55935" path="/var/lib/kubelet/pods/81febbc4-d060-481c-aef0-51eab2c55935/volumes" Dec 02 20:51:08 crc kubenswrapper[4796]: I1202 20:51:08.265991 4796 scope.go:117] "RemoveContainer" containerID="d3cce0b8f3056f766811a1454db69248cc4760a9f4f703245864873237ff69e6" Dec 02 20:51:08 crc kubenswrapper[4796]: E1202 20:51:08.267183 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wzhpq_openshift-machine-config-operator(5558dc7c-93f9-4212-bf22-fdec743e47ee)\"" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" Dec 02 20:51:20 crc kubenswrapper[4796]: I1202 20:51:20.265893 4796 scope.go:117] "RemoveContainer" containerID="d3cce0b8f3056f766811a1454db69248cc4760a9f4f703245864873237ff69e6" Dec 02 20:51:20 crc kubenswrapper[4796]: E1202 20:51:20.266907 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wzhpq_openshift-machine-config-operator(5558dc7c-93f9-4212-bf22-fdec743e47ee)\"" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" Dec 02 20:51:31 crc kubenswrapper[4796]: I1202 20:51:31.265463 4796 scope.go:117] "RemoveContainer" containerID="d3cce0b8f3056f766811a1454db69248cc4760a9f4f703245864873237ff69e6" Dec 02 20:51:31 crc kubenswrapper[4796]: E1202 20:51:31.266500 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wzhpq_openshift-machine-config-operator(5558dc7c-93f9-4212-bf22-fdec743e47ee)\"" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" Dec 02 20:51:43 crc kubenswrapper[4796]: I1202 20:51:43.265097 4796 scope.go:117] "RemoveContainer" containerID="d3cce0b8f3056f766811a1454db69248cc4760a9f4f703245864873237ff69e6" Dec 02 20:51:43 crc kubenswrapper[4796]: E1202 20:51:43.266065 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wzhpq_openshift-machine-config-operator(5558dc7c-93f9-4212-bf22-fdec743e47ee)\"" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" Dec 02 20:51:56 crc kubenswrapper[4796]: I1202 20:51:56.265876 4796 scope.go:117] "RemoveContainer" containerID="d3cce0b8f3056f766811a1454db69248cc4760a9f4f703245864873237ff69e6" Dec 02 20:51:56 crc kubenswrapper[4796]: E1202 20:51:56.271737 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wzhpq_openshift-machine-config-operator(5558dc7c-93f9-4212-bf22-fdec743e47ee)\"" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" Dec 02 20:52:09 crc kubenswrapper[4796]: I1202 20:52:09.265457 4796 scope.go:117] "RemoveContainer" containerID="d3cce0b8f3056f766811a1454db69248cc4760a9f4f703245864873237ff69e6" Dec 02 20:52:09 crc kubenswrapper[4796]: E1202 20:52:09.266321 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wzhpq_openshift-machine-config-operator(5558dc7c-93f9-4212-bf22-fdec743e47ee)\"" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" Dec 02 20:52:22 crc kubenswrapper[4796]: I1202 20:52:22.265583 4796 scope.go:117] "RemoveContainer" containerID="d3cce0b8f3056f766811a1454db69248cc4760a9f4f703245864873237ff69e6" Dec 02 20:52:22 crc kubenswrapper[4796]: E1202 20:52:22.267876 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wzhpq_openshift-machine-config-operator(5558dc7c-93f9-4212-bf22-fdec743e47ee)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee" Dec 02 20:52:37 crc kubenswrapper[4796]: I1202 20:52:37.277137 4796 scope.go:117] "RemoveContainer" containerID="d3cce0b8f3056f766811a1454db69248cc4760a9f4f703245864873237ff69e6" Dec 02 20:52:37 crc kubenswrapper[4796]: E1202 20:52:37.277753 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wzhpq_openshift-machine-config-operator(5558dc7c-93f9-4212-bf22-fdec743e47ee)\"" pod="openshift-machine-config-operator/machine-config-daemon-wzhpq" podUID="5558dc7c-93f9-4212-bf22-fdec743e47ee"